TOK Friends: Group items tagged "best"


Choose to Be Grateful. It Will Make You Happier. - The New York Times - 2 views

  • Building the best life does not require fealty to feelings in the name of authenticity, but rather rebelling against negative impulses and acting right even when we don’t feel like it. In a nutshell, acting grateful can actually make you grateful.
  • some people are just naturally more grateful than others. A 2014 article in the journal Social Cognitive and Affective Neuroscience identified a variation in a gene (CD38) associated with gratitude. Some people simply have a heightened genetic tendency to experience, in the researchers’ words, “global relationship satisfaction, perceived partner responsiveness and positive emotions (particularly love).” That is, those relentlessly positive people you know who seem grateful all the time may simply be mutants.
  • Evidence suggests that we can actively choose to practice gratitude — and that doing so raises our happiness.
  • Researchers in one 2003 study randomly assigned one group of study participants to keep a short weekly list of the things they were grateful for, while other groups listed hassles or neutral events. Ten weeks later, the first group enjoyed significantly greater life satisfaction than the others.
  • acting happy, regardless of feelings, coaxes one’s brain into processing positive emotions. In one famous 1993 experiment, researchers asked human subjects to smile forcibly for 20 seconds while tensing facial muscles, notably the muscles around the eyes called the orbicularis oculi (which create “crow’s feet”). They found that this action stimulated brain activity associated with positive emotions.
  • gratitude stimulates the hypothalamus (a key part of the brain that regulates stress) and the ventral tegmental area (part of our “reward circuitry” that produces the sensation of pleasure).
  • In the slightly more elegant language of the Stoic philosopher Epictetus, “He is a man of sense who does not grieve for what he has not, but rejoices in what he has.”
  • In addition to building our own happiness, choosing gratitude can also bring out the best in those around us
  • when their competence was questioned, the subjects tended to lash out with aggression and personal denigration. When shown gratitude, however, they reduced the bad behavior. That is, the best way to disarm an angry interlocutor is with a warm “thank you.”
  • A new study in the Journal of Consumer Psychology finds evidence that people begin to crave sweets when they are asked to express gratitude.
  • There are concrete strategies that each of us can adopt. First, start with “interior gratitude,” the practice of giving thanks privately
  • he recommends that readers systematically express gratitude in letters to loved ones and colleagues. A disciplined way to put this into practice is to make it as routine as morning coffee. Write two short emails each morning to friends, family or colleagues, thanking them for what they do.
  • Finally, be grateful for useless things
  • think of the small, useless things you experience — the smell of fall in the air, the fragment of a song that reminds you of when you were a kid. Give thanks.

Do Political Experts Know What They're Talking About? | Wired Science | Wired... - 1 views

  • I often joke that every cable news show should be forced to display a disclaimer, streaming in a loop at the bottom of the screen. The disclaimer would read: “These talking heads have been scientifically proven to not know what they are talking about. Their blather is for entertainment purposes only.” The viewer would then be referred to Tetlock’s most famous research project, which began in 1984.
  • He picked a few hundred political experts – people who made their living “commenting or offering advice on political and economic trends” – and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be re-elected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought process, so that he could better understand how they made up their minds.
  • Most of Tetlock’s questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. These results are summarized in his excellent Expert Political Judgment. [A quick simulation of this chance baseline appears after these notes.]
  • Some experts displayed a top-down style of reasoning: politics as a deductive art. They started with a big-idea premise about human nature, society, or economics and applied it to the specifics of the case. They tended to reach more confident conclusions about the future. And the positions they reached were easier to classify ideologically: that is the Keynesian prediction, that is the free-market fundamentalist prediction, that is the worst-case environmentalist prediction, that is the best-case technology-driven growth prediction, and so on. Other experts displayed a bottom-up style of reasoning: politics as a much messier inductive art. They reached less confident conclusions, and they were more likely to draw on a seemingly contradictory mix of ideas in reaching those conclusions (sometimes from the left, sometimes from the right). We called the big-idea experts “hedgehogs” (they know one big thing) and the more eclectic experts “foxes” (they know many, not so big things).
  • The most consistent predictor of more accurate forecasts was “style of reasoning”: experts with the more eclectic, self-critical, and modest cognitive styles tended to outperform the big-idea people (foxes tended to outperform hedgehogs).
  • Lehrer: Can non-experts do anything to encourage a more effective punditocracy?
  • Tetlock: Yes, non-experts can encourage more accountability in the punditocracy. Pundits are remarkably skillful at appearing to go out on a limb in their claims about the future, without actually going out on one. For instance, they often “predict” continued instability and turmoil in the Middle East (predicting the present) but they virtually never get around to telling you exactly what would have to happen to disconfirm their expectations. They are essentially impossible to pin down. If pundits felt that their public credibility hinged on participating in level playing field forecasting exercises in which they must pit their wits against an extremely difficult-to-predict world, I suspect they would learn, quite quickly, to be more flexible and foxlike in their policy pronouncements.
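
The force of that "less than 33 percent" figure comes from the chance baseline: on three-option questions, uniform random guessing is right about one time in three. A minimal Python simulation makes the baseline concrete (illustrative only; not code or data from Tetlock's study):

```python
# Chance baseline for three-option questions (illustrative only).
# A guesser picking uniformly at random is right about 1/3 of the time,
# so pundits scoring below 33 percent underperform blind guessing.
import random

random.seed(42)
n_questions = 100_000
# Treat option 0 as the correct answer to every question; a uniform
# random guess scores whenever it also lands on 0.
correct = sum(random.randrange(3) == 0 for _ in range(n_questions))
print(f"random-guess accuracy: {correct / n_questions:.3f}")  # about 0.333
```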

Of bears and biases: scientific judgment and the fate of Yellowstone's grizzlies | The ... - 0 views

  • In March, the U.S. Fish and Wildlife Service (USFWS) announced its intent to remove protections afforded by the U.S. Endangered Species Act (ESA) to grizzly bears in the Greater Yellowstone Ecosystem
  • However, conservation organizations oppose “delisting” GYE grizzlies. They cite persistent threats to grizzlies, public opposition to delisting and ongoing scientific uncertainty regarding the population’s viability. Indeed, scientific uncertainty, especially threats posed by a changing climate, is one reason a federal court reversed a similar decision back in 2009, returning federal protections to GYE grizzlies.
  • According to the ESA, decisions about the listing of species are to be made “solely on the basis of the best scientific and commercial data available.”
  • Numerous environmental statutes mandate that government agencies consider “the best available science” when making decisions. And agencies routinely consult with scientific experts to fulfill such mandates.
  • Such provisions work reasonably well when science offers clear and simple, black-and-white answers. But when there is uncertainty, is the expectation of scientific objectivity realistic?
  • To gain insight into what role bias may play in listing decisions, we surveyed a group of grizzly bear researchers. We found that experts’ judgments were associated with a number of factors outside the “best commercial and scientific data,” including their professional affiliations and social norms. Furthermore, we found that while there is no consensus in the scientific community regarding the threats to grizzly bears, the majority of scientists support continued listing.

Science and Truth - We're All in It Together - NYTimes.com - 1 views

  • Almost any article worth reading these days generates some version of this long tail of commentary. Depending on whether they are moderated, these comments can range from blistering flameouts to smart factual corrections to full-on challenges to the very heart of an article’s argument.
  • These days, the comments section of any engaging article is almost as necessary a read as the piece itself — if you want to know how insider experts received the article and how those outsiders processed the news.
  • By now, readers understand that the definitive “copy” of any article is no longer the one on paper but the online copy, precisely because it’s the version that’s been read and mauled and annotated by readers.
  • The print edition of any article is little more than a trophy version, the equivalent of a diploma or certificate of merit — suitable for framing, not much else.
  • We call the fallout to any article the “comments,” but since they are often filled with solid arguments, smart corrections and new facts, the thing needs a nobler name. Maybe “gloss.” In the Middle Ages, students often wrote notes in the margins of well-regarded manuscripts. These glosses, along with other forms of marginalia, took on a life of their own, becoming their own form of knowledge, as important as, say, midrash is to Jewish scriptures. The best glosses were compiled into, of course, glossaries and later published
  • The truth is that every decent article now aspires to become the wiki of its own headline.
  • Any good article that has provoked a real discussion typically comes with a small box of post-publication notes. And, since many magazines are naming the editor of the article as well as the author, the outing of the editor can come with a new duty: writing the bottom note that reviews the emendations to the article and perhaps, most importantly, summarizes the thrust of the discussion. If the writer gains the glory of the writing, the editor can win the credit for chaperoning the best and most provocative pieces.
  • Some may fear that recognizing the commentary of every article will turn every subject into an endless postmodern discussion. But actually, the opposite is true. Recognizing the gloss allows us to pause in the seemingly unending back and forth of contemporary free speech and free inquiry to say, well, for now, this much is true — the ivory-bill still hasn’t been definitively seen since World War II, climate change is happening and caused by mankind, natural selection is the best description of nature’s creative force. Et cetera.

Atul Gawande: Failure and Rescue : The New Yorker - 0 views

  • the critical skills of the best surgeons I saw involved the ability to handle complexity and uncertainty. They had developed judgment, mastery of teamwork, and willingness to accept responsibility for the consequences of their choices. In this respect, I realized, surgery turns out to be no different than a life in teaching, public service, business, or almost anything you may decide to pursue. We all face complexity and uncertainty no matter where our path takes us. That means we all face the risk of failure. So along the way, we all are forced to develop these critical capacities—of judgment, teamwork, and acceptance of responsibility.
  • people admonish us: take risks; be willing to fail. But this has always puzzled me. Do you want a surgeon whose motto is “I like taking risks”? We do in fact want people to take risks, to strive for difficult goals even when the possibility of failure looms. Progress cannot happen otherwise. But how they do it is what seems to matter. The key to reducing death after surgery was the introduction of ways to reduce the risk of things going wrong—through specialization, better planning, and technology.
  • there continue to be huge differences between hospitals in the outcomes of their care. Some places still have far higher death rates than others. And an interesting line of research has opened up asking why.
  • I thought that the best places simply did a better job at controlling and minimizing risks—that they did a better job of preventing things from going wrong. But, to my surprise, they didn’t. Their complication rates after surgery were almost the same as others. Instead, what they proved to be really great at was rescuing people when they had a complication, preventing failures from becoming a catastrophe.
  • this is what distinguished the great from the mediocre. They didn’t fail less. They rescued more.
  • This may in fact be the real story of human and societal improvement. We talk a lot about “risk management”—a nice hygienic phrase. But in the end, risk is necessary. Things can and will go wrong. Yet some have a better capacity to prepare for the possibility, to limit the damage, and to sometimes even retrieve success from failure.
  • When things go wrong, there seem to be three main pitfalls to avoid, three ways to fail to rescue. You could choose a wrong plan, an inadequate plan, or no plan at all. Say you’re cooking and you inadvertently set a grease pan on fire. Throwing gasoline on the fire would be a completely wrong plan. Trying to blow the fire out would be inadequate. And ignoring it—“Fire? What fire?”—would be no plan at all.
  • All policies court failure—our war in Iraq, for instance, or the effort to stimulate our struggling economy. But when you refuse to even acknowledge that things aren’t going as expected, failure can become a humanitarian disaster. The sooner you’re able to see clearly that your best hopes and intentions have gone awry, the better. You have more room to pivot and adjust. You have more of a chance to rescue.
  • But recognizing that your expectations are proving wrong—accepting that you need a new plan—is commonly the hardest thing to do. We have this problem called confidence. To take a risk, you must have confidence in yourself
  • Yet you cannot blind yourself to failure, either. Indeed, you must prepare for it. For, strangely enough, only then is success possible.
  • So you will take risks, and you will have failures. But it’s what happens afterward that is defining. A failure often does not have to be a failure at all. However, you have to be ready for it—will you admit when things go wrong? Will you take steps to set them right?—because the difference between triumph and defeat, you’ll find, isn’t about willingness to take risks. It’s about mastery of rescue.

What Is College For? - NYTimes.com - 0 views

  • 74 percent of graduates from four-year colleges say that their education was “very useful in helping them grow intellectually.”
  • When, as is often the case in business education and teacher training, practical skills far outweigh theoretical understanding, we are moving beyond the intellectual culture that defines higher education
  • This lack of academic engagement is real, even among schools with the best students and the best teachers, and it increases dramatically as the quality of the school decreases.  But it results from a basic misunderstanding — by both students and teachers — of what colleges are for.
  • First of all, they are not simply for the education of students.  This is an essential function, but the raison d’être of a college is to nourish a world of intellectual culture; that is, a world of ideas, dedicated to what we can know scientifically, understand humanistically, or express artistically.  In our society, this world is mainly populated by members of college faculties: scientists, humanists, social scientists (who straddle the humanities and the sciences properly speaking), and those who study the fine arts. Law, medicine and engineering are included to the extent that they are still understood as “learned professions,” deploying practical skills that are nonetheless deeply rooted in scientific knowledge or humanistic understanding
  • there are serious concerns about the quality of this experience. In particular, the university curriculum leaves students disengaged from the material they are supposed to be learning. They see most of their courses as intrinsically “boring,” of value only if they provide training relevant to future employment or if the teacher has a pleasing (amusing, exciting, “relevant”) way of presenting the material. As a result, students spend only as much time as they need to get what they see as acceptable grades (on average, about 12 to 14 hours a week for all courses combined). Professors have ceased to expect genuine engagement from students and often give good grades (B or better) to work that is at best minimally adequate.
  • Our support for higher education makes sense only if we regard this intellectual culture as essential to our society
  • This has important consequences for how we regard what goes on in college classrooms.  Teachers need to see themselves as, first of all, intellectuals, dedicated to understanding poetry, history, human psychology, physics, biology — or whatever is the focus of their discipline.  But they also need to realize that this dedication expresses not just their idiosyncratic interest in certain questions but a conviction that those questions have general human significance, even apart from immediately practical applications.  This is why a discipline requires not just research but also teaching
  • Students, in turn, need to recognize that their college education is above all a matter of opening themselves up to new dimensions of knowledge and understanding.  Teaching is not a matter of (as we too often say) “making a subject (poetry, physics, philosophy) interesting” to students but of students coming to see how such subjects are intrinsically interesting.  It is more a matter of students moving beyond their interests than of teachers fitting their subjects to interests that students already have.   Good teaching does not make a course’s subject more interesting; it gives the students more interests — and so makes them more interesting.

Can Dostoevsky's "Notes from Underground" Still Kick You in the Gut? : The New Yorker - 0 views

  • You can easily imagine what Dostoevsky would make of modern sociology, psychology, advertising techniques, war games, polling of any sort. What’s wrong with such techniques, in both their cynical or ameliorative uses, was simply stated by Sartre, in 1945: “All materialist philosophies create man as an object, a stone.” The underground man says that, on the contrary, human beings are unfathomable, unknowable.
  • Predictors of human behavior, as the underground man says, generally assume we will act in our own best interests. But do we? The same question might be asked today, when “rational-choice theory” is still a predictive model for economists and sociologists and many others. When working-class whites vote for Republican policies that will further reduce their economic power—are they voting in their best interests? What about wealthy liberals in favor of higher taxes on the rich? Do people making terrible life choices—say, poor women having children with unreliable men—act in their best interests? Do they calculate at all? What if our own interest, as we construe it, consists of refusing what others want of us? That motive can’t be measured. It can’t even be known, except by novelists like Dostoevsky. Reason is only one part of our temperament, the underground man says. Individualism as a value includes the right to screw yourself up.

How can we best assess the neuropsychological effects of violent video game play? | Pet... - 0 views

  • Every time a research paper about violent video games makes it into the news, it feels like we’re in a time loop. Any claims that the study makes about the potential positive or (usually) negative effects of playing games tend to get over-egged to the point of ridiculousness.
  • At best, the measures of aggression that are used in such work are unstandardised; at worst, the field has been shown to be riddled with basic methodological and analytical flaws. These problems are further compounded by entrenched ideologies and a reluctance from some researchers to even talk to their ‘adversaries’, let alone discuss the potential for adversarial collaborations
  • All of this means that we’re stuck at an impasse with violent video games research; it feels like we’re no more clued up on what the actual behavioural effects are now than, say, five or ten years ago.
  • In stage 1, they submit the introduction, methods, proposed analysis, and if necessary, pilot data. This manuscript then goes through the usual peer review process, and is assessed on criteria such as the soundness of the methods and analysis, and overall plausibility of the stated hypotheses.
  • Once researchers have passed through stage 1, they can then move on to data collection. In stage 2, they then submit the full manuscript – the introduction and agreed methods from stage 1, plus results and discussion sections. The results must include the outcome of the analyses agreed in stage 1, but the researchers are allowed to include additional analyses in a separate, ‘exploratory’ section (as long as they are justified).
  • Pre-registering scientific articles in this way helps to protect against a number of undesirable practices (such as p-hacking and HARKing) that can exaggerate statistical findings and make non-existent effects seem real. While this is a problem across psychology generally, it is a particularly extreme problem for violent video game research.
  • By outlining the intended methods and analysis protocols beforehand, Registered Reports protect against these problems, as the review process concentrates on the robustness of the proposed methods. And Registered Reports offer an additional advantage: because manuscripts are never accepted based on the outcome of the data analysis, the process is immune to researcher party lines. It doesn’t matter which research ‘camp’ you are in; your data – and just as importantly, your methods - will speak for themselves.

Shady Science: How the Brain Remembers Colors - 0 views

  • When you bring home the wrong color of paint from the hardware store, it may not be your foggy memory at fault
  • Flombaum and his colleagues conducted four experiments on four different groups of people.
  • while the human brain can distinguish between millions of colors, it has difficulty remembering specific shades.
  • The exercise was designed to find the perceived boundaries between colors, the researchers said
  • scientists showed different people the same colors, but this time they asked them to find the "best example" of a particular color.
  • researchers showed participants colored squares, and asked them to select the best match on the color wheel. In a fourth experiment, another group of participants completed the same task, but there was a delay of 90 milliseconds between when each color square was displayed and when they were asked to select the best match on the color wheel.
  • This tendency to lump colors together could explain why it's so hard to match the color of house paint based on memory alone, the researchers said
  • categories are indeed important in how people identify and remember colors.
  • participants who were asked to name the colors reliably saw five hues: blue, yellow, pink, purple and green
  • "Where that fuzzy naming happened, those are the boundaries"

WHICH IS THE BEST LANGUAGE TO LEARN? | More Intelligent Life - 2 views

  • For language lovers, the facts are grim: Anglophones simply aren’t learning them any more. In Britain, despite four decades in the European Union, the number of A-levels taken in French and German has fallen by half in the past 20 years, while what was a growing trend of Spanish-learning has stalled. In America, the numbers are equally sorry.
  • compelling reasons remain for learning other languages.
  • First of all, learning any foreign language helps you understand all language better—many Anglophones first encounter the words “past participle” not in an English class, but in French. Second, there is the cultural broadening. Literature is always best read in the original. Poetry and lyrics suffer particularly badly in translation. And learning another tongue helps the student grasp another way of thinking.
  • is Chinese the language of the future?
  • So which one should you, or your children, learn? If you take a glance at advertisements in New York or A-level options in Britain, an answer seems to leap out: Mandarin.
  • The practical reasons are just as compelling. In business, if the team on the other side of the table knows your language but you don’t know theirs, they almost certainly know more about you and your company than you do about them and theirs—a bad position to negotiate from.
  • If you were to learn ten languages ranked by general usefulness, Japanese would probably not make the list. And the key reason for Japanese’s limited spread will also put the brakes on Chinese.
  • This factor is the Chinese writing system (which Japan borrowed and adapted centuries ago). The learner needs to know at least 3,000-4,000 characters to make sense of written Chinese, and thousands more to have a real feel for it. Chinese, with all its tones, is hard enough to speak. But the mammoth feat of memory required to be literate in Mandarin is harder still. It deters most foreigners from ever mastering the system—and increasingly trips up Chinese natives.
  • A recent survey reported in the People’s Daily found 84% of respondents agreeing that skill in Chinese is declining.
  • Fewer and fewer native speakers learn to produce characters in traditional calligraphy. Instead, they write their language the same way we do—with a computer. And not only that, but they use the Roman alphabet to produce Chinese characters: type in wo and Chinese language-support software will offer a menu of characters pronounced wo; the user selects the one desired. (Or if the user types in wo shi zhongguo ren, “I am Chinese”, the software detects the meaning and picks the right characters.) With less and less need to recall the characters cold, the Chinese are forgetting them. [A toy sketch of this candidate lookup appears after these notes.]
  • As long as China keeps the character-based system—which will probably be a long time, thanks to cultural attachment and practical concerns alike—Chinese is very unlikely to become a true world language, an auxiliary language like English, the language a Brazilian chemist will publish papers in, hoping that they will be read in Finland and Canada. By all means, if China is your main interest, for business or pleasure, learn Chinese. It is fascinating, and learnable—though Moser’s online essay, “Why Chinese is so damn hard,” might discourage the faint of heart and the short of time.
  • But if I were asked what foreign language is the most useful, and given no more parameters (where? for what purpose?), my answer would be French. Whatever you think of France, the language is much less limited than many people realise.
  • French ranks only 16th on the list of languages ranked by native speakers. But ranked above it are languages like Telugu and Javanese that no one would call world languages. Hindi does not even unite India. Also in the top 15 are Arabic, Spanish and Portuguese, major languages to be sure, but regionally concentrated. If your interest is the Middle East or Islam, by all means learn Arabic. If your interest is Latin America, Spanish or Portuguese is the way to go. Or both; learning one makes the second quite easy.
  • if you want another truly global language, there are surprisingly few candidates, and for me French is unquestionably top of the list. It can enhance your enjoyment of art, history, literature and food, while giving you an important tool in business and a useful one in diplomacy. It has native speakers in every region on earth. And lest we forget its heartland itself, France attracts more tourists than any other country—76.8m in 2010, according to the World Tourism Organisation, leaving America a distant second with 59.7m
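
As a rough illustration of the pinyin input mechanism described in the notes above: the user types a romanized syllable, the software offers a menu of matching characters, and (given more context) can pick characters automatically. A toy Python sketch; the character lists here are a tiny made-up fragment, not a real IME dictionary:

```python
# Toy pinyin input-method lookup (illustrative fragment, not a real IME).
# Real systems use large dictionaries and statistical context models to
# rank candidates; here the user simply picks from a static menu.
CANDIDATES = {
    "wo": ["我", "窝", "卧"],    # wo: "I", "nest", "to lie down"
    "shi": ["是", "十", "时"],   # shi: "to be", "ten", "time"
}

def menu(syllable: str) -> list[str]:
    """Return the candidate characters for a typed pinyin syllable."""
    return CANDIDATES.get(syllable, [])

print(menu("wo"))  # ['我', '窝', '卧']; the user selects the one intended
```

The excerpt's point follows directly: because the software does the recall, the writer only ever needs to recognize the right character in a menu, never reproduce it cold from memory.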

Why a Harvard Professor Has Mixed Feelings When Students Take Jobs in Finance - NYTimes... - 0 views

  • Many of the best students are not going to research cancer, teach and inspire the next generation, or embark on careers in public service. Instead, large numbers are becoming traders, brokers and bankers. At Harvard in 2014, nearly one in five students who took a job went to finance. For economics majors, the number was closer to one in two.
  • arbitrage is valuable only to a point. It has a gold rush element with prospectors racing to get to the gold first. While finding gold has value, finding gold before someone else does is mainly rent-seeking.
  • I wonder: Is this a good decision for society as a whole?
  • As an economist, I look at it this way: Every profession produces both private returns — the fruits of labor that a person enjoys — and social returns — those that society enjoys. If I set up a shop on Etsy selling photographs, my private returns may be defined as the revenue I generate. The social returns are the pleasure that my photographs provide to my customers.
  • People in some professions provide a surplus of social returns. Inventors are a good example. Take the modern semiconductor. It made possible countless other inventions — nearly every piece of computing we interact with today.
  • countries suffer when talented people become what we economists call “rent seekers.” Instead of creating wealth, rent seekers simply transfer it — from others to themselves.
  • Job titles don’t tell you whether someone is primarily a rent seeker. A lawyer who helps draft precise contracts may actually be helping the wheels of commerce turn, and so creating wealth. But trial lawyers in a country with poorly functioning tort systems may simply be extracting rents: They can make money by pursuing frivolous lawsuits.
  • In this respect, finance is a vexing industry.
  • I can’t help wondering: Is this the best use of talent?
  • Researchers at the University of Chicago Booth School of Business have shown in a study how extreme this financial gold rush has become in at least one corner of the financial world. From 2005 to 2011, they found that the duration of arbitrage opportunities in the Chicago Mercantile Exchange and the New York Stock Exchange declined from a median of 97 milliseconds to seven milliseconds.
  • correcting mispricing at this speed is unlikely to have any real social benefit: What serious investment is being guided by prices at the millisecond level? Short-term arbitrage, while lucrative, seems to be mainly rent-seeking.
  • This kind of rent-seeking behavior is widespread in other parts of finance. Banks sometimes make money by using hidden fees rather than adding true value. Debt collection agencies may use unscrupulous practices. Lenders to poor people buying used cars can make profits with business models that encourage high rates of default — making money by taking advantage of people’s overconfidence about what cars they can afford and by repossessing vehicles. These kinds of practices may be both lucrative — and socially pernicious.
  • The poor face a tremendous problem every day juggling money and expenses. Their pay often fluctuates week by week, yet they must pay rent no matter what they earn. Right now, poor people often use expensive payday loans or must incur expensive late fees.
  • Surely we could do better. Finding ways to smooth out these shocks is the kind of important, socially valuable problem that finance could solve. Many other crucial social problems have finance at their root, from saving for college to insuring unemployment risk.
  • Instead of finding clever ways to hide fees, banking innovations could solve these real and important problems.
  • So how should I feel about my students going into finance? I hope they realize that they have the potential to do great good and not simply make money. It may not be how the industry is structured now, but idealism and inventiveness are two of the best traits of youth, and finance especially could use them.

Which Type of Exercise Is Best for the Brain? - The New York Times - 1 views

  • Some forms of exercise may be much more effective than others at bulking up the brain, according to a remarkable new study in rats. For the first time, scientists compared head-to-head the neurological impacts of different types of exercise: running, weight training and high-intensity interval training. The surprising results suggest that going hard may not be the best option for long-term brain health.
  • exercise changes the structure and function of the brain. Studies in animals and people have shown that physical activity generally increases brain volume and can reduce the number and size of age-related holes in the brain’s white and gray matter.
  • Exercise also, and perhaps most resonantly, augments adult neurogenesis, which is the creation of new brain cells in an already mature brain. In studies with animals, exercise, in the form of running wheels or treadmills, has been found to double or even triple the number of new neurons that appear afterward in the animals’ hippocampus, a key area of the brain for learning and memory, compared to the brains of animals that remain sedentary. Scientists believe that exercise has similar impacts on the human hippocampus.
  • These past studies of exercise and neurogenesis understandably have focused on distance running. Lab rodents know how to run. But whether other forms of exercise likewise prompt increases in neurogenesis has been unknown and is an issue of increasing interest
  • In a new study, which was published this month in the Journal of Physiology, researchers at the University of Jyvaskyla in Finland and other institutions gathered a large group of adult male rats. The researchers injected the rats with a substance that marks new brain cells and then set groups of them to an array of different workouts, with one group remaining sedentary to serve as controls.
  • They found very different levels of neurogenesis, depending on how each animal had exercised. Those rats that had jogged on wheels showed robust levels of neurogenesis. Their hippocampal tissue teemed with new neurons, far more than in the brains of the sedentary animals. The greater the distance that a runner had covered during the experiment, the more new cells its brain now contained. There were far fewer new neurons in the brains of the animals that had completed high-intensity interval training. They showed somewhat higher amounts than in the sedentary animals but far less than in the distance runners. And the weight-training rats, although they were much stronger at the end of the experiment than they had been at the start, showed no discernible augmentation of neurogenesis. Their hippocampal tissue looked just like that of the animals that had not exercised at all.
  • “sustained aerobic exercise might be most beneficial for brain health also in humans.”
  • Just why distance running was so much more potent at promoting neurogenesis than the other workouts is not clear, although Dr. Nokia and her colleagues speculate that distance running stimulates the release of a particular substance in the brain known as brain-derived neurotrophic factor that is known to regulate neurogenesis. The more miles an animal runs, the more B.D.N.F. it produces. Weight training, on the other hand, while extremely beneficial for muscular health, has previously been shown to have little effect on the body’s levels of B.D.N.F.
  • As for high-intensity interval training, its potential brain benefits may be undercut by its very intensity, Dr. Nokia said. It is, by intent, much more physiologically draining and stressful than moderate running, and “stress tends to decrease adult hippocampal neurogenesis,” she said.
  • These results do not mean, however, that only running and similar moderate endurance workouts strengthen the brain, Dr. Nokia said. Those activities do seem to prompt the most neurogenesis in the hippocampus. But weight training and high-intensity intervals probably lead to different types of changes elsewhere in the brain. They might, for instance, encourage the creation of additional blood vessels or new connections between brain cells or between different parts of the brain.

How Reliable Are the Social Sciences? - NYTimes.com - 3 views

  • media reports often seem to assume that any result presented as “scientific” has a claim to our serious attention. But this is hardly a reasonable view.  There is considerable distance between, say, the confidence we should place in astronomers’ calculations of eclipses and a small marketing study suggesting that consumers prefer laundry soap in blue boxes
  • A rational assessment of a scientific result must first take account of the broader context of the particular science involved.  Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science?
  • Second, and even more important, there is our overall assessment of work in a given science in comparison with other sciences.
  • The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive.
  • Even the best-developed social sciences like economics have nothing like this status.
  • when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events. We may have a theory that explains all the known data, but that may be just the result of our having fitted the theory to that data. The strongest support for a theory comes from its ability to correctly predict data that it was not designed to explain. [A small overfitting sketch after these notes illustrates this point.]
  • The case for a negative answer lies in the predictive power of the core natural sciences compared with even the most highly developed social sciences
  • Is there any work on the effectiveness of teaching that is solidly enough established to support major policy decisions?
  • While the physical sciences produce many detailed and precise predictions, the social sciences do not. 
  • most social science research falls far short of the natural sciences’ standard of controlled experiments.
  • Without a strong track record of experiments leading to successful predictions, there is seldom a basis for taking social scientific results as definitive.
  • Because of the many interrelated causes at work in social systems, many questions are simply “impervious to experimentation.”
  • even when we can get reliable experimental results, the causal complexity restricts us to “extremely conditional, statistical statements,” which severely limit the range of cases to which the results apply.
  • above all, we need to develop a much better sense of the severely limited reliability of social scientific results.   Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.
  • Given the limited predictive success and the lack of consensus in social sciences, their conclusions can seldom be primary guides to setting policy.  At best, they can supplement the general knowledge, practical experience, good sense and critical intelligence that we can only hope our political leaders will have.
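
The "fitted the theory to that data" worry quoted above is, in statistical terms, overfitting: a sufficiently flexible model can match every known observation and still predict new ones badly. A minimal numerical sketch (assumes numpy is available; the data are synthetic and purely illustrative):

```python
# Overfitting sketch: a degree-9 polynomial "explains" all ten training
# points almost exactly, yet a plain line predicts unseen data far better.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # truth: a noisy straight line
x_new = np.linspace(0.05, 1.05, 7)           # unseen data from the same process
y_new = 2.0 * x_new + rng.normal(0.0, 0.1, x_new.size)

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)        # degree 9 may warn: ill-conditioned
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.5f}, new-data MSE {new_mse:.5f}")
```

The flexible fit wins on the data it was tuned to and loses on data it was not designed to explain, which is exactly why out-of-sample prediction is the stronger test.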

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P... - 0 views

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free, and therefore I’m responsible for everything I do, a dizzying fact which causes an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating.
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics.
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor.
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play's much-quoted and frequently misunderstood final line has it: 'Hell is other people.' Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people's memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
30More

Sex, Morality, and Modernity: Can Immanuel Kant Unite Us? - The Atlantic - 1 views

  • Before I jump back into the conversation about sexual ethics that has unfolded on the Web in recent days, inspired by Emily Witt's n+1 essay "What Do You Desire?" and featuring a fair number of my favorite writers, it's worth saying a few words about why I so value debate on this subject, and my reasons for running through some sex-life hypotheticals near the end of this article.
  • As we think and live, the investment required to understand one another increases. So do the stakes of disagreeing. 18-year-olds on the cusp of leaving home for the first time may disagree profoundly about how best to live and flourish, but the disagreements are abstract. It is easy, at 18, to express profound disagreement with, say, a friend's notions of child-rearing. To do so when he's 28, married, and raising a son or daughter is delicate, and perhaps best avoided
  • I have been speaking of friends. The gulfs that separate strangers can be wider and more difficult to navigate because there is no history of love and mutual goodwill as a foundation for trust. Less investment has been made, so there is less incentive to persevere through the hard parts.
  • ...27 more annotations...
  • I've grown very close to new people whose perspectives are radically different than mine.
  • It floors me: These individuals are all repositories of wisdom. They've gleaned it from experiences I'll never have, assumptions I don't share, and brains wired different than mine. I want to learn what they know.
  • Does that get us anywhere? A little ways, I think.
  • "Are we stuck with a passé traditionalism on one hand, and total laissez-faire on the other?" Is there common ground shared by the orthodox-Christian sexual ethics of a Rod Dreher and those who treat consent as their lodestar?
  • Gobry suggests that Immanuel Kant provides a framework everyone can and should embrace, wherein consent isn't nearly enough to make a sexual act moral--we must, in addition, treat the people in our sex lives as ends, not means.
  • Here's how Kant put it: "Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end."
  • the disappearance of a default sexual ethic in America and the divergence of our lived experiences means we have more to learn from one another than ever, even as our different choices raise the emotional stakes.
  • Nor does it seem intuitively obvious that a suffering, terminally ill 90-year-old is regarding himself as a means, or an object, if he prefers to end his life with a lethal injection rather than waiting three months in semi-lucid agony for his lungs to slowly shut down and suffocate him. (Kant thought suicide impermissible.) The terminally ill man isn't denigrating his own worth or the preciousness of life or saying it's permissible "any time" it is difficult. He believes ending his life is permissible only because the end is nigh, and the interim affords no opportunity for "living" in anything except a narrow biological sense.
  • It seems to me that, whether we're talking about a three-week college relationship or a 60-year marriage, it is equally possible to treat one's partner as a means or as an end (though I would agree that "treating as means" is more common in hookups than marriage)
  • my simple definition is this: It is wrong to treat human persons in such a way that they are reduced to objects. This says nothing about consent: a person may consent to be used as an object, but it is still wrong to use them that way. It says nothing about utility: society may approve of using some people as objects; whether those people are actual slaves or economically oppressed wage-slaves it is still wrong to treat them like objects. What it says, in fact, is that human beings have intrinsic worth and dignity such that treating them like objects is wrong.
  • what it means to treat someone as a means, or as an object, turns out to be in dispute.
  • Years ago, I interviewed a sister who was acting as a surrogate for a sibling who couldn't carry her own child. The notion that either regarded the other (or themselves) as an object seems preposterous to me. Neither was treating the other as a means, because they both freely chose, desired and worked in concert to achieve the same end.
  • It seems to me that the Kantian insight is exactly the sort of challenge traditionalist Christians should make to college students as they try to persuade them to look more critically at hookup culture. I think a lot of college students casually mislead one another about their intentions and degree of investment, feigning romantic interest when actually they just want to have sex. Some would say they're transgressing against consent. I think Kant has a more powerful challenge. 
  • Ultimately, Kant only gets us a little way in this conversation because, outside the realm of sex, he thinks consent goes a long way toward mitigating the means problem, whereas in the realm of sex, not so much. This is inseparable from notions he has about sex that many of us just don't share.
  • two Biblical passages fit my moral intuition even better than Kant. "Love your neighbor as yourself." And "therefore all things whatsoever ye would that men should do to you, do ye even so to them."
  • "do unto others..." is extremely demanding, hard to live up to, and a very close fit with my moral intuitions.
  • "Do unto others" is also enough to condemn all sorts of porn, and to share all sorts of common ground with Dreher beyond consent. Interesting that it leaves us with so many disagreements too. "Do unto others" is core to my support for gay marriage.
  • Are our bones always to be trusted? The sexual behavior parents would be mortified by is highly variable across time and cultures. So how can I regard it as a credible guide of inherent wrong? Professional football and championship boxing are every bit as violent and far more physically damaging to their participants than that basement scene, yet their cultural familiarity is such that most people don't feel them to be morally suspect. Lots of parents are proud, not mortified, when a son makes the NFL.
  • "Porn operates in fantasy the way boxing and football operate in fantasy. The injuries are quite real." He is, as you can see, uncomfortable with both. Forced at gunpoint to choose which of two events could proceed on a given night, an exact replica of the San Francisco porn shoot or an Ultimate Fighting Championship tournament--if I had to shut one down and grant the other permission to proceed--what would the correct choice be?
  • insofar as there is something morally objectionable here, it's that the audience is taking pleasure in the spectacle of someone being abused, whether that abuse is fact or convincing illusion. Violent sports and violent porn interact with dark impulses in humanity, as their producers well know.
  • If Princess Donna was failing to "do unto others" at all, the audience was arguably who she failed. Would she want others to entertain her by stoking her dark human impulses? Then again, perhaps she is helping to neuter and dissipate them in a harmless way. That's one theory of sports, isn't it? We go to war on the gridiron as a replacement for going to war? And the rise in violent porn has seemed to coincide with falling, not rising, incidence of sexual violence. 
  • On all sorts of moral questions I can articulate confident judgments. But I am confident in neither my intellect nor my gut when it comes to judging Princess Donna, or whether others are transgressing against themselves or "nature" when doing things that I myself wouldn't want to do. Without understanding their mindset, why they find that thing desirable, or what it costs them, if anything, I am loath to declare that it's grounded in depravity or inherently immoral just because it triggers my disgust instinct, especially if the people involved articulate a plausible moral code that they are following, and it even passes a widely held standard like "do unto others."
  • Here's another way to put it. Asked to render moral judgments about sexual behaviors, there are some I would readily label as immoral. (Rape is an extreme example. Showing the topless photo your girlfriend sent to your best friend is a milder one.) But I often choose to hold back and err on the side of not rendering a definitive judgment, knowing that occasionally means I'll fail to label as unethical some things that actually turn out to be morally suspect.
  • Partly I take that approach because, unlike Dreher, I don't see any great value or urgency in the condemnations, and unlike Douthat, I worry more about wrongful stigma than lack of rightful stigmas
  • In a society where notions of sexual morality aren't coercively enforced by the church or the state, what purpose is condemnation serving?
  • People are great! Erring on the side of failing to condemn permits at least the possibility of people from all of these world views engaging in conversation with one another.
  • Dreher worries about the fact that, despite our discomfort, neither Witt nor I can bring ourselves to say that the sexual acts performed during the S.F. porn shoot were definitely wrong. Does that really matter? My interlocutors perhaps see a cost more clearly than me, as well they might. My bias is that just arguing around the fire is elevating.
17More

The best time of day - and year - to work most effectively - The Washington Post - 0 views

  • Some of us are larks -- some of us are owls. But if you look at distribution, most of us are a little bit of both — what I call “third birds.”
  • There's a period of the day when we're at our peak, and that's best for doing analytic tasks -- things like writing a report or auditing a financial statement. There's the trough, which is the dip -- that's not good for anything. And then there's recovery, which is less optimal, but we do better at insight and creativity tasks.
  • the bigger issue here is that we have thought of "when" as a second-order question. We take questions of how we do things, what we do, and who we do it with very seriously, but we stick the "when" questions over at the kids' table.
  • ...14 more annotations...
  • What is it about a new year? How does our psychology influence how we think about that and making fresh starts? We do what social psychologists call temporal accounting -- that is, we have a ledger in our head of how we are spending our time. What we’re trying to do, in some cases, is relegate our previous selves to the past: This year we’re going to do a lot better.
  • breaks are much more important than we realize.
  • Many hard-core workplaces think of breaks as a deviation from performance, when in fact the science of breaks tells us they’re a part of performance.
  • Research shows us that social breaks are better than solo breaks -- taking a break with somebody else is more restorative than doing it on your own. A break that involves movement is better than a stationary one. And then there's the restorative power in nature. Simply going outside rather than being inside, simply being able to look out a window during a break, is better. And there's the importance of being fully detached,
  • Every day I write down two breaks that I’m going to take. I make a 'break list,' and I try to treat them with the same reverence with which I’d treat scheduled meetings. We would never skip a meeting.
  • One of the issues you explore is when it pays to go first — whether you're up for a competitive pitch or trying to get a job. When is it good to go first?
  • Here’s where you should go first: If you’re not the default choice
  • If you are the default choice, you’re better off not going first. What happens is that early in a process, people are more likely to be open-minded, to challenge assumptions. But over time, they wear out, and they’re more likely to go with the default choice.
  • Also, if you’re operating in an uncertain environment -- and this is actually really important -- where the criteria for selections are not fully fully sharp, you’re better off going at the end. In the beginning, the judges are still trying to figure out what they want.
  • In fact, what researchers have found is that at the beginning, project teams pretty much do nothing. They bicker, they dicker. Yet astonishingly, many of the project teams the researchers followed ended up really getting started in earnest at the exact midpoint. If you give a team 34 days, they'll get started in earnest on day 17. This is actually a big shift in the way organizational scholars thought about how teams work.
  • There are two key things a leader can do at a midpoint. One is to identify it to make it salient: Say "OK guys, it's day 17 of this 34-day project. We better get going."
  • The second comes from research on basketball. It shows that when teams are ahead at the midpoint, they get complacent. When they're way behind at the midpoint, they get demoralized. But when they're a little behind, it can be galvanizing. So what leaders can do is suggest, "Hey, we're a little bit behind."
  • When you're giving feedback to employees, should you give good news or bad news first?
  • If you ask people what they prefer, four out of five prefer getting the bad news first. The reason has to do with endings. Given the choice, human beings prefer endings that elevate. We prefer endings that go up, that have a rising sequence rather than a declining sequence.
75More

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call "data exhaust" — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we're not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
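  • The pay-per-click arithmetic behind that incentive is easy to make concrete. What follows is a minimal Python sketch, using hypothetical bid and click-through figures rather than Google's actual auction mechanics, of why a tiny rise in predicted click rate produces an outsized gain in revenue:

      # Toy pay-per-click economics: the advertiser pays only when a user
      # clicks, so expected revenue per impression = bid * click-through rate.
      # All figures below are made up for illustration.

      def revenue_per_thousand(bid_per_click: float, ctr: float) -> float:
          """Expected revenue per 1,000 ad impressions."""
          return bid_per_click * ctr * 1000

      baseline = revenue_per_thousand(bid_per_click=0.50, ctr=0.010)  # 1.0% CTR
      improved = revenue_per_thousand(bid_per_click=0.50, ctr=0.011)  # 1.1% CTR

      print(f"baseline: ${baseline:.2f} per 1,000 impressions")   # $5.00
      print(f"improved: ${improved:.2f} per 1,000 impressions")   # $5.50
      print(f"lift: {100 * (improved - baseline) / baseline:.0f}%")  # 10%

    In this toy model, a 0.1-point improvement in click-through rate yields a 10 percent revenue lift; multiplied across billions of daily queries, that is the financial logic that made ever more accurate behavioral prediction so lucrative.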
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company's whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”

Why Study History? (1985) | AHA - 0 views

  • Isn't there quite enough to learn about the world today? Why add to the burden by looking at the past
  • Historical knowledge is no more and no less than carefully and critically constructed collective memory. As such it can both make us wiser in our public choices and more richly human in our private lives.
  • Without individual memory, a person literally loses his or her identity, and would not know how to act in encounters with others. Imagine waking up one morning unable to tell total strangers from family and friends!
  • ...37 more annotations...
  • Collective memory is similar, though its loss does not immediately paralyze everyday private activity. But ignorance of history — that is, absent or defective collective memory — does deprive us of the best available guide for public action, especially in encounters with outsiders
  • Often it is enough for experts to know about outsiders, if their advice is listened to. But democratic citizenship and effective participation in the determination of public policy require citizens to share a collective memory, organized into historical knowledge and belief
  • This value of historical knowledge obviously justifies teaching and learning about what happened in recent times, for the way things are descends from the way they were yesterday and the day before that
  • in fact, institutions that govern a great deal of our everyday behavior took shape hundreds or even thousands of years ago
  • Only an acquaintance with the entire human adventure on earth allows us to understand these dimensions of contemporary reality.
  • it follows that study of history is essential for every young person.
  • Collective memory is quite the same. Historians are always at work reinterpreting the past, asking new questions, searching new sources and finding new meanings in old documents in order to bring the perspective of new knowledge and experience to bear on the task of understanding the past.
  • what we know and believe about history is always changing. In other words, our collective, codified memory alters with time just as personal memories do, and for the same reasons.
  • skeptics are likely to conclude that history has no right to take student time from other subjects. If what is taught today is not really true, how can it claim space in a crowded school curriculum?
  • what if the world is more complicated and diverse than words can ever tell? What if human minds are incapable of finding neat pigeonholes into which everything that happens will fit?
  • What if we have to learn to live with uncertainty and probabilities, and act on the basis of the best guesswork we are capable of?
  • Then, surely, the changing perspectives of historical understanding are the very best introduction we can have to the practical problems of real life. Then, surely, a serious effort to understand the interplay of change and continuity in human affairs is the only adequate introduction human beings can have to the confusing flow of events that constitutes the actual, adult world.
  • Memory is not something fixed and forever. As time passes, remembered personal experiences take on new meanings.
  • Early in this century, teachers and academic administrators pretty well agreed that two sorts of history courses were needed: a survey of the national history of the United States and a survey of European history.
  • Memory, indeed, makes us human. History, our collective memory, carefully codified and critically revised, makes us social, sharing ideas and ideals with others so as to form all sorts of different human groups
  • The varieties of history are enormous; facts and probabilities about the past are far too numerous for anyone to comprehend them all. Every sort of human group has its own history
  • Where to start? How bring some sort of order to the enormous variety of things known and believed about the past?
  • Systematic sciences are not enough. They discount time, and therefore oversimplify reality, especially human reality.
  • This second course was often broadened into a survey of Western civilization in the 1930s and 1940s
  • But by the 1960s and 1970s these courses were becoming outdated, left behind by the rise of new kinds of social and quantitative history, especially the history of women, of Blacks, and of other formerly overlooked groups within the borders of the United States, and of peoples emerging from colonial status in the world beyond our borders.
  • much harder to combine old with new to make an inclusive, judiciously balanced (and far less novel) introductory course for high school or college students.
  • But abandoning the effort to present a meaningful portrait of the entire national and civilizational past destroyed the original justification for requiring students to study history
  • Competing subjects abounded, and no one could or would decide what mattered most and should take precedence. As this happened, studying history became only one among many possible ways of spending time in school.
  • The costs of this change are now becoming apparent, and many concerned persons agree that returning to a more structured curriculum, in which history ought to play a prominent part, is imperative.
  • three levels of generality seem likely to have the greatest importance for ordinary people.
  • First is family, local, neighborhood history
  • Second is national history, because that is where political power is concentrated in our time.
  • Last is global history, because intensified communications make encounters with all the other peoples of the earth increasingly important.
  • Other pasts are certainly worth attention, but are better studied in the context of a prior acquaintance with personal-local, national, and global history. That is because these three levels are the ones that affect most powerfully what all other groups and segments of society actually do.
  • National history that leaves out Blacks and women and other minorities is no longer acceptable; but American history that leaves out the Founding Fathers and the Constitution is not acceptable either. What is needed is a vision of the whole, warts and all.
  • the study of history does not lead to exact prediction of future events. Though it fosters practical wisdom, knowledge of the past does not permit anyone to know exactly what is going to happen
  • Consequently, the lessons of history, though supremely valuable when wisely formulated, become grossly misleading when oversimplifiers try to transfer them mechanically from one age to another, or from one place to another.
  • Predictable fixity is simply not the human way of behaving. Probabilities and possibilities-together with a few complete surprises-are what we live with and must learn to expect.
  • Second, as acquaintance with the past expands, delight in knowing more and more can and often does become an end in itself.
  • On the other hand, studying alien religious beliefs, strange customs, diverse family patterns and vanished social structures shows how differently various human groups have tried to cope
  • Broadening our humanity and extending our sensibilities by recognizing sameness and difference throughout the recorded past is therefore an important reason for studying history, and especially the history of peoples far away and long ago
  • For we can only know ourselves by knowing how we resemble and how we differ from others. Acquaintance with the human past is the only way to such self knowledge.

Best COVID-19 Vaccination Strategies, According To Mathematicians : Shots - Health News... - 1 views

  • Only a vaccine will save America from the COVID-19 pandemic. At least that's the opinion of nearly all public health officials.
  • But there's another group that plays a less obvious but still crucial role in making sure vaccines do what they're intended: mathematicians.
  • How best to use that limited supply is a question mathematicians can help answer.
  • ...6 more annotations...
  • They can help with decisions about who gets the vaccine first when supplies are limited.
  • "One of those is how much is the virus spreading as the vaccine is being rolled out? And another factor is. How fast is the vaccine being rolled out?"
  • It's also important to know how effective a vaccine is at preventing disease, how long protection lasts, and whether it not only prevents someone from getting sick but also from transmitting COVID-19.
  • Larremore says to end a pandemic, it generally makes sense to vaccinate those most capable of spreading disease (see the sketch after this list).
  • But even if a mathematical model suggests the most effective path, it doesn't provide all the answers public health officials need.
  • Right now, modelers are trying to help public health officials decide if it makes sense to use a single dose of the Moderna and Pfizer vaccines to extend the limited supply, even though the vaccines have only been tested using a two-dose regimen
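To make the prioritization point concrete, here is a minimal sketch of the kind of compartmental model mathematicians use to compare allocation strategies. It is a toy two-group SIR simulation, not Larremore's actual model: the population split, contact rates, transmission and recovery rates, vaccine supply, and efficacy values are all illustrative assumptions, not figures from the article.

```python
import numpy as np

def simulate(share_to_high_contact, days=180):
    """Toy two-group SIR model; returns cumulative infections.

    Group 0 is high-contact, group 1 is low-contact. All parameters
    below are illustrative assumptions, not values from the article.
    """
    N = np.array([0.2, 0.8])         # population fractions (assumed)
    contacts = np.array([4.0, 1.0])  # relative contact rates (assumed)
    beta, gamma = 0.3, 0.1           # transmission and recovery rates (assumed)
    supply = 0.2                     # doses cover 20% of the population (assumed)
    efficacy = 0.9                   # protection against infection (assumed)

    # Split the fixed supply between the groups, never exceeding group size.
    doses = supply * np.array([share_to_high_contact, 1.0 - share_to_high_contact])
    doses = np.minimum(doses, N)

    S = N - efficacy * doses         # effectively immunized people leave S
    I = np.array([1e-4, 1e-4])       # small seed of infections in each group
    cumulative = I.sum()

    for _ in range(days):            # forward Euler, one-day time steps
        # Contact-weighted prevalence drives the force of infection.
        prevalence = (contacts * I).sum() / (contacts * N).sum()
        new_inf = beta * contacts * prevalence * S
        S = S - new_inf
        I = I + new_inf - gamma * I
        cumulative += new_inf.sum()
    return cumulative

# Compare three ways of allocating the same limited supply.
for share in (0.0, 0.5, 1.0):
    print(f"doses to high-contact group: {share:.0%} -> "
          f"total infected: {simulate(share):.3f}")
```

Under these assumptions, steering the limited supply toward the high-contact group yields the fewest cumulative infections, matching the intuition quoted above; a real analysis would also fold in rollout speed, waning protection, one- versus two-dose efficacy, and age-specific mortality.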