
TOK@ISPrague / Group items tagged


markfrankel18

The Moral Instinct - New York Times - 3 views

  • It seems we may all be vulnerable to moral illusions the ethical equivalent of the bending lines that trick the eye on cereal boxes and in psychology textbooks. Illusions are a favorite tool of perception scientists for exposing the workings of the five senses, and of philosophers for shaking people out of the naïve belief that our minds give us a transparent window onto the world (since if our eyes can be fooled by an illusion, why should we trust them at other times?). Today, a new field is using illusions to unmask a sixth sense, the moral sense.
  • The first hallmark of moralization is that the rules it invokes are felt to be universal. Prohibitions of rape and murder, for example, are felt not to be matters of local custom but to be universally and objectively warranted. One can easily say, “I don’t like brussels sprouts, but I don’t care if you eat them,” but no one would say, “I don’t like killing, but I don’t care if you murder someone.” The other hallmark is that people feel that those who commit immoral acts deserve to be punished.
  • Until recently, it was understood that some people didn’t enjoy smoking or avoided it because it was hazardous to their health. But with the discovery of the harmful effects of secondhand smoke, smoking is now treated as immoral. Smokers are ostracized; images of people smoking are censored; and entities touched by smoke are felt to be contaminated (so hotels have not only nonsmoking rooms but nonsmoking floors). The desire for retribution has been visited on tobacco companies, who have been slapped with staggering “punitive damages.” At the same time, many behaviors have been amoralized, switched from moral failings to lifestyle choices.
  • ...10 more annotations...
  • But whether an activity flips our mental switches to the “moral” setting isn’t just a matter of how much harm it does. We don’t show contempt to the man who fails to change the batteries in his smoke alarms or takes his family on a driving vacation, both of which multiply the risk they will die in an accident. Driving a gas-guzzling Hummer is reprehensible, but driving a gas-guzzling old Volvo is not; eating a Big Mac is unconscionable, but not imported cheese or crème brûlée. The reason for these double standards is obvious: people tend to align their moralization with their own lifestyles.
  • People don’t generally engage in moral reasoning, Haidt argues, but moral rationalization: they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.
  • Together, the findings corroborate Greene’s theory that our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.
  • The psychologist Philip Tetlock has shown that the mentality of taboo — a conviction that some thoughts are sinful to think — is not just a superstition of Polynesians but a mind-set that can easily be triggered in college-educated Americans. Just ask them to think about applying the sphere of reciprocity to relationships customarily governed by community or authority. When Tetlock asked subjects for their opinions on whether adoption agencies should place children with the couples willing to pay the most, whether people should have the right to sell their organs and whether they should be able to buy their way out of jury duty, the subjects not only disagreed but felt personally insulted and were outraged that anyone would raise the question.
  • The moral sense, then, may be rooted in the design of the normal human brain. Yet for all the awe that may fill our minds when we reflect on an innate moral law within, the idea is at best incomplete. Consider this moral dilemma: A runaway trolley is about to kill a schoolteacher. You can divert the trolley onto a sidetrack, but the trolley would trip a switch sending a signal to a class of 6-year-olds, giving them permission to name a teddy bear Muhammad. Is it permissible to pull the lever? This is no joke. Last month a British woman teaching in a private school in Sudan allowed her class to name a teddy bear after the most popular boy in the class, who bore the name of the founder of Islam. She was jailed for blasphemy and threatened with a public flogging, while a mob outside the prison demanded her death. To the protesters, the woman’s life clearly had less value than maximizing the dignity of their religion, and their judgment on whether it is right to divert the hypothetical trolley would have differed from ours. Whatever grammar guides people’s moral judgments can’t be all that universal. Anyone who stayed awake through Anthropology 101 can offer many other examples.
  • The impulse to avoid harm, which gives trolley ponderers the willies when they consider throwing a man off a bridge, can also be found in rhesus monkeys, who go hungry rather than pull a chain that delivers food to them and a shock to another monkey. Respect for authority is clearly related to the pecking orders of dominance and appeasement that are widespread in the animal kingdom. The purity-defilement contrast taps the emotion of disgust that is triggered by potential disease vectors like bodily effluvia, decaying flesh and unconventional forms of meat, and by risky sexual practices like incest.
  • All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life — sex, government, commerce, religion, diet and so on — depends on the culture.
  • By analogy, we are born with a universal moral grammar that forces us to analyze human action in terms of its moral structure, with just as little awareness. The idea that the moral sense is an innate part of human nature is not far-fetched. A list of human universals collected by the anthropologist Donald E. Brown includes many moral concepts and emotions, including a distinction between right and wrong; empathy; fairness; admiration of generosity; rights and obligations; proscription of murder, rape and other forms of violence; redress of wrongs; sanctions for wrongs against the community; shame; and taboos.
  • Here is the worry. The scientific outlook has taught us that some parts of our subjective experience are products of our biological makeup and have no objective counterpart in the world. The qualitative difference between red and green, the tastiness of fruit and foulness of carrion, the scariness of heights and prettiness of flowers are design features of our common nervous system, and if our species had evolved in a different ecosystem or if we were missing a few genes, our reactions could go the other way. Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?
  • Putting God in charge of morality is one way to solve the problem, of course, but Plato made short work of it 2,400 years ago. Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist? And if, on the other hand, God was forced by moral reasons to issue some dictates and not others — if a command to torture a child was never an option — then why not appeal to those reasons directly?
Lawrence Hrubes

Maya Angelou and the Internet's Stamp of Approval - The New Yorker - 0 views

  • This week, the United States Postal Service came in for a full news cycle’s worth of ridicule after it was pointed out, by the Washington Post, that the agency’s new Maya Angelou stamp featured a quotation that the late poet and memoirist didn’t write. The line—“A bird doesn’t sing because it has an answer, it sings because it has a song”—has been widely attributed to Angelou. And it seems like something she might have written, perhaps as a shorthand explanation for the title of her most famous book, “I Know Why the Caged Bird Sings.” But the line, in a slightly different form, was originally published in a poetry collection from 1967 called “A Cup of Sun,” by Joan Walsh Anglund. The Post reported this on Monday. By Tuesday, when such luminaries as First Lady Michelle Obama and Oprah Winfrey stood onstage in front of a giant reproduction of the Angelou stamp at the official unveiling, everyone knew that the words behind them belonged to someone else. According to the U.S.P.S., more than eighty million Angelou stamps were produced, and there are no plans to retract them.
Lawrence Hrubes

Most People Can’t Multitask, But a Few Are Exceptional. : The New Yorker - 0 views

  • In 2012, David Strayer found himself in a research lab, on the outskirts of London, observing something he hadn’t thought possible: extraordinary multitasking. For his entire career, Strayer, a professor of psychology at the University of Utah, had been studying attention—how it works and how it doesn’t. Methods had come and gone, theories had replaced theories, but one constant remained: humans couldn’t multitask. Each time someone tried to focus on more than one thing at a time, performance suffered. Most recently, Strayer had been focussing on people who drive while on the phone. Over the course of a decade, he and his colleagues had demonstrated that drivers using cell phones—even hands-free devices—were at just as high a risk of accidents as intoxicated ones. Reaction time slowed, attention decreased to the point where they’d miss more than half the things they’d otherwise see—a billboard or a child by the road, it mattered not.
  • What, then, was going on here in the London lab? The woman he was looking at—let’s call her Cassie—was an exception to what twenty-five years of research had taught him. As she took on more and more tasks, she didn’t get worse. She got better. There she was, driving, doing complex math, responding to barking prompts through a cell phone, and she wasn’t breaking a sweat. She was, in other words, what Strayer would ultimately decide to call a supertasker.
  • Cassie in particular was the best multitasker he had ever seen. “It’s a really, really hard test,” Strayer recalls. “Some people come out woozy—I have a headache, that really kind of hurts, that sort of thing. But she solved everything.”
  • ...1 more annotation...
  • Their task was simple: keep your eyes on the road; keep a safe distance; brake as required. If they failed to do so, they’d eventually collide with their pace car. Then came the multitasking additions. They would have to not only drive the car but follow audio instructions from a cell phone. Specifically, they would hear a series of words, ranging from two to five at a time, and be asked to recall them in the right order. And there was a twist. Interspersed with the words were math problems. If they heard one of those, the drivers had to answer “true,” if the problem was solved correctly, or “false,” if it wasn’t. They would, for instance, hear “cat” and immediately after, “is three divided by one, minus one, equal to two?” followed by “box,” another problem, and so on. Intermittently, they would hear a prompt to “recall,” at which point, they’d have to repeat back all the words they’d heard since the last prompt. The agony lasted about an hour and a half.
Lawrence Hrubes

How a Raccoon Became an Aardvark : The New Yorker - 0 views

  • In July of 2008, Dylan Breves, then a seventeen-year-old student from New York City, made a mundane edit to a Wikipedia entry on the coati. The coati, a member of the raccoon family, is “also known as … a Brazilian aardvark,” Breves wrote. He did not cite a source for this nickname, and with good reason: he had invented it. He and his brother had spotted several coatis while on a trip to the Iguaçu Falls, in Brazil, where they had mistaken them for actual aardvarks.
  • Over time, though, something strange happened: the nickname caught on. About a year later, Breves searched online for the phrase “Brazilian aardvark.” Not only was his edit still on Wikipedia, but his search brought up hundreds of other Web sites about coatis. References to the so-called “Brazilian aardvark” have since appeared in the Independent, the Daily Mail, and even in a book published by the University of Chicago. Breves’s role in all this seems clear: a Google search for “Brazilian aardvark” will return no mentions before Breves made the edit, in July, 2008. The claim that the coati is known as a Brazilian aardvark remains on its Wikipedia entry, only now it cites a 2010 article in the Telegraph as evidence.
  • This kind of feedback loop—wherein an error that appears on Wikipedia then trickles to sources that Wikipedia considers authoritative, which are in turn used as evidence for the original falsehood—is a documented phenomenon. There’s even a Wikipedia article describing it.
markfrankel18

Why Are Certain Smells So Hard to Identify? - The New Yorker - 0 views

  • A recent paper in the journal Cognition, for instance, quipped that if people were as bad at naming sights as they are at naming scents, “they would be diagnosed as aphasic and sent for medical help.” The paper quoted scattershot attempts by participants in a previous study to label the smell of lemon: “air freshener,” “bathroom freshener,” “magic marker,” “candy,” “lemon-fresh Pledge,” “some kind of fruit.” This sort of difficulty seems to have very little to do, however, with the nose’s actual capabilities. Last spring, an article in the journal Science reported that we are capable of discriminating more than a trillion different odors. (A biologist at Caltech subsequently disputed the finding, arguing that it contained mathematical errors, though he acknowledged the “richness of human olfactory experience.”) Whence, then, our bumbling translation of scent into speech?
  • That question was the subject, two weekends ago, of an American Association for the Advancement of Science symposium at the San Jose Convention Center (which smelled, pleasantly but nonspecifically, of clean carpet). The preëminence of eye over nose was apparent even in the symposium abstract, which touted data that “shed new light” and opened up “yet new vistas.” (Reading it over during a phone interview, Jonathan Reinarz, a professor at the University of Birmingham, in England, and the author of “Past Scents: Historical Perspectives on Smell,” asked me, “What’s wrong with a little bit of inscent?”) Nevertheless, the people on the panel were decidedly pro-smell. “One thing that everyone at this symposium will agree on is that human olfactory discriminatory power is quite excellent, if you give it a chance,” Jay Gottfried, a Northwestern University neuroscientist, told me. Noam Sobel, of the Weizmann Institute of Science, used a stark hypothetical to drive home the ways in which smell can shape behavior: “If I offer you a beautiful mate, of the gender of your choice, who smells of sewage, versus a less attractive mate who smells of sweet spice, with whom would you mate?”
  • But difficulty with talking about smell is not universal. Asifa Majid, a psycholinguist at Radboud University Nijmegen, in the Netherlands, and the organizer of the A.A.A.S. symposium, studies a group of around a thousand hunter-gatherers in northern Malaysia and southern Thailand who speak a language called Jahai. In one analysis, Majid and her colleague Niclas Burenhult found that speakers of Jahai were as good at classifying scratch-and-sniff cards as they were at classifying color chips; their English-speaking counterparts, meanwhile, tended to give meandering and disparate descriptions of scents. At the symposium, Majid presented new research involving around thirty Jahai and thirty Dutch people. In that study, the Jahai named smells in an average of two seconds, whereas the Dutch took thirteen—“and this is just to say, ‘Uh, I don’t know,’ ” Majid joked onstage.
  • ...1 more annotation...
  • Olfaction experts each have their pet theories as to why our scent lexicon is so lacking. Jonathan Reinarz blames the lingering effects of the Enlightenment, which, he says, placed a special emphasis on vision. Jay Gottfried, who is something of a nasal prodigy—he once guessed, on the basis of perfume residue, that one of his grad students had gotten back together with an ex-girlfriend—blames physiology. Whereas visual information is subject to elaborate processing in many areas of the brain, his research suggests, odor information is parsed in a much less intricate way, notably by the limbic system, which is associated with emotion and memory formation. This area, Gottfried said, takes “a more crude and unpolished approach to the process of naming,” and the brain’s language centers can have trouble making use of such unrefined input. Meanwhile, Donald A. Wilson, a neuroscientist at New York University School of Medicine, blames biases acquired in childhood.
Lawrence Hrubes

What to Call a Doubter of Climate Change? - NYTimes.com - 0 views

  • People who reject the findings of climate science are dismissed as “deniers” and “disinformers.” Those who accept the science are attacked as “alarmists” or “warmistas.” The latter term, evoking the Sandinista revolutionaries of Nicaragua, is perhaps meant to suggest that the science is part of some socialist plot.
  • The petition asks the news media to abandon the most frequently used term for people who question climate science, “skeptic,” and call them “climate deniers” instead. Climate scientists are among the most vocal critics of using the term “climate skeptic” to describe people who flatly reject their findings. They point out that skepticism is the very foundation of the scientific method. The modern consensus about the risks of climate change, they say, is based on evidence that has piled up over the course of decades and has been subjected to critical scrutiny every step of the way.
  • In other words, the climate scientists see themselves as the true skeptics, having arrived at a durable consensus about emissions simply because the evidence of risk has become overwhelming. And in this view, people who reject the evidence are phony skeptics, arguing their case by cherry-picking studies, manipulating data, and refusing to weigh the evidence as a whole. The petition asking the media to drop the “climate skeptic” label began with Mark B. Boslough, a physicist in New Mexico who grew increasingly annoyed by the term over several years. The phrase is wrong, he said, because “these people do not embrace the scientific method.”
Lawrence Hrubes

Why Are Some Cultures More Individualistic Than Others? - NYTimes.com - 0 views

  • AMERICANS and Europeans stand out from the rest of the world for our sense of ourselves as individuals. We like to think of ourselves as unique, autonomous, self-motivated, self-made. As the anthropologist Clifford Geertz observed, this is a peculiar idea. People in the rest of the world are more likely to understand themselves as interwoven with other people — as interdependent, not independent. In such social worlds, your goal is to fit in and adjust yourself to others, not to stand out. People imagine themselves as part of a larger whole — threads in a web, not lone horsemen on the frontier. In America, we say that the squeaky wheel gets the grease. In Japan, people say that the nail that stands up gets hammered down.
  • These are broad brush strokes, but the research demonstrating the differences is remarkably robust and it shows that they have far-reaching consequences. The social psychologist Richard E. Nisbett and his colleagues found that these different orientations toward independence and interdependence affected cognitive processing. For example, Americans are more likely to ignore the context, and Asians to attend to it. Show an image of a large fish swimming among other fish and seaweed fronds, and the Americans will remember the single central fish first. That’s what sticks in their minds. Japanese viewers will begin their recall with the background. They’ll also remember more about the seaweed and other objects in the scene. Another social psychologist, Hazel Rose Markus, asked people arriving at San Francisco International Airport to fill out a survey and offered them a handful of pens to use, for example four orange and one green; those of European descent more often chose the one pen that stood out, while the Asians chose the one more like the others.
  • In May, the journal Science published a study, led by a young University of Virginia psychologist, Thomas Talhelm, that ascribed these different orientations to the social worlds created by wheat farming and rice farming. Rice is a finicky crop. Because rice paddies need standing water, they require complex irrigation systems that have to be built and drained each year. One farmer’s water use affects his neighbor’s yield. A community of rice farmers needs to work together in tightly integrated ways. Not wheat farmers. Wheat needs only rainfall, not irrigation. To plant and harvest it takes half as much work as rice does, and substantially less coordination and cooperation. And historically, Europeans have been wheat farmers and Asians have grown rice. The authors of the study in Science argue that over thousands of years, rice- and wheat-growing societies developed distinctive cultures: “You do not need to farm rice yourself to inherit rice culture.”
Lawrence Hrubes

The Bitter Fight Over the Benefits of Bilingualism - The Atlantic - 0 views

  • It’s an intuitive claim, but also a profound one. It asserts that the benefits of bilingualism extend well beyond the realm of language, and into skills that we use in every aspect of our lives. This view is now widespread, heralded by a large community of scientists, promoted in books and magazines, and pushed by advocacy organizations.
  • But a growing number of psychologists say that this mountain of evidence is actually a house of cards, built upon flimsy foundations.
  • Jon Andoni Duñabeitia, a cognitive neuroscientist at the Basque Center on Cognition, Brain, and Language, was one of them. In two large studies, involving 360 and 504 children respectively, he found no evidence that Basque kids, raised on Basque and Spanish at home and at school, had better mental control than monolingual Spanish children.
  • ...1 more annotation...
  • Similar controversies have popped up throughout psychology, fueling talk of a “reproducibility crisis” in which scientists struggle to duplicate classic textbook results. In many of these cases, classic psychological phenomena that seem to be backed by years of supportive evidence suddenly become fleeting and phantasmal. The causes are manifold. Journals are more likely to accept positive, attention-grabbing papers than negative, contradictory ones, which pushes scientists towards running small studies or tweaking experiments on the fly—practices that lead to flashy, publishable discoveries that may not actually be true.
Lawrence Hrubes

Banksy Finds a Canvas and a New Fanbase in Gaza's Ruins - NYTimes.com - 1 views

  • GAZA — Very little of Abu Shadi Shenbari’s family home remains in Beit Hanoun, in the northern Gaza Strip. Only a concrete bathroom wall was left standing when Israeli forces flattened the neighborhood near the border with Israel during the war with Hamas last summer. Though Mr. Shenbari had all but abandoned that last panel of erect concrete, in recent days he began building a wood and wire-mesh fort with a flimsy nylon roof to protect the bombed-out bathroom wall, which is now home to a 10-foot-tall depiction of a kitten. The spray-painted mural was created by the elusive British graffiti artist Banksy, who slipped in and out of Gaza in February, leaving his mark on three slabs of rubble left from Israel’s 50-day fight with Hamas, the Islamic group that controls Gaza.
Lawrence Hrubes

Everything Dies, Right? But Does Everything Have To Die? Here's A Surprise : Krulwich W... - 1 views

  • A puzzlement. Why, I wonder, are both these things true? There is an animal, a wee little thing, the size of a poppy seed, that lives in lakes and rivers and eats whatever flows through it; it's called a gastrotrich. It has an extremely short life. It hatches. Three days later, it's all grown up, with a fully adult body "complete with a mouth, a gut, sensory organs and a brain," says science writer Carl Zimmer. In 72 hours it's ready to make babies, and as soon as it does, it begins to shrivel, crumple ... and usually within a week, it's gone. Dead of old age. Sad, no? A seven-day life. But now comes the weird part. There's another very small animal (a little bigger than a gastrotrich) that also lives in freshwater ponds and lakes, also matures very quickly, also reproduces within three or four days. But, oh, my God, this one has a totally different life span (and when I say totally, I mean it's radically, wildly, unfathomably different) from a gastrotrich. It's a hydra. And what it does — or rather, what it doesn't do — is worthy of a motion picture. So we made one. Well, a little one. With my NPR colleague, science reporter Adam Cole, we're going to show you what science has learned about the hydra. Adam drew it, animated it, scored it, edited it. My only contribution was writing it with him, but what you are about to see is as close as science gets to a miracle.
markfrankel18

Psychiatry's Mind-Brain Problem - The New York Times - 1 views

  • Recently, a psychiatric study on first episodes of psychosis made front-page news. People seemed quite surprised by the finding: that lower doses of psychotropic drugs, when combined with individual psychotherapy, family education and a focus on social adaptation, resulted in decreased symptoms and increased wellness.
  • But the real surprise — and disappointment — was that this was considered so surprising.
  • ...2 more annotations...
  • Unfortunately, Dr. Kane’s study arrives alongside a troubling new reality. His project was made possible by funding from the National Institute of Mental Health before it implemented a controversial requirement: Since 2014, in order to receive the institute’s support, clinical researchers must explicitly focus on a target such as a biomarker or neural circuit. It is hard to imagine how Dr. Kane’s study (or one like it) would get funding today, since it does not do this. In fact, psychiatry at present has yet to adequately identify any specific biomarkers or circuits for its major illnesses.
markfrankel18

Why People Mistake Good Deals for Rip-Offs : The New Yorker - 5 views

  • Last Saturday, an elderly man set up a stall near Central Park and sold eight spray-painted canvases for less than one five-hundredth of their true value. The art works were worth more than two hundred and twenty-five thousand dollars, but the man walked away with just four hundred and twenty dollars. Each canvas was an original by the enigmatic British artist Banksy, who was approaching the midpoint of a monthlong residency in New York City. Banksy had asked the man to sell the works on his behalf. For several hours, hundreds of oblivious locals and tourists ignored the quiet salesman, along with the treasure he was hiding in plain sight. The day ended with thirty paintings left unsold. One Banksy aficionado, certain she could distinguish a fake from the real thing, quietly scolded the man for knocking off the artist’s work.
  • What makes Banksy’s subversive stunt so compelling is that it forces us to acknowledge how incoherently humans derive value. How can a person be willing to pay five hundred times more than another for the same art work born in the same artist’s studio?
  • Some concepts are easy to evaluate without a reference standard. You don’t need a yardstick, for example, when deciding whether you’re well-rested or exhausted, or hot or cold, because those states are “inherently evaluable”—they’re easy to measure in absolute terms because we have sensitive biological mechanisms that respond when our bodies demand rest, or when the temperature rises far above or falls far below seventy-two degrees. Everyone agrees that three days is too long a period without sleep, but art works satisfy far too abstract a need to attract a universal valuation. When you learn that your favorite abstract art work was actually painted by a child, its value declines precipitously (unless the child happens to be your prodigious four-year-old).
  • ...1 more annotation...
  • We’re swayed by all the wrong cues, and our valuation estimates are correspondingly incoherent. Banksy knew this when he asked an elderly man to sell his works in Central Park. It’s comforting to believe that we get what we pay for, but discerning true value is as difficult as spotting a genuine Banksy canvas in a city brimming with imitations.
Lawrence Hrubes

Ceres, Pluto, and the War Over Dwarf Planets - The New Yorker - 1 views

  • Whatever the probes find, it probably won’t help untangle the tortuous reasoning that led to Pluto and Ceres being labelled as dwarf planets in the first place. That happened in 2006, a few months after New Horizons launched and about a year before Dawn did, at a meeting of the International Astronomical Union, the organization that is in charge of classifying and naming celestial objects. The I.A.U. defines a dwarf planet according to four criteria: it must orbit the sun, it must be spherical, it must not be a satellite of another planet, and it must not have “cleared the neighborhood” of other objects of comparable size. Ceres has a diameter of less than six hundred miles, Pluto of about fourteen hundred miles. By comparison, Mercury, now the smallest official planet in our solar system, is more than three thousand miles across. So it’s not unreasonable, Stern says, to call Pluto both a planet and a dwarf, provided that one doesn’t cancel out the other. “I’m the one who originally coined the term ‘dwarf planet,’ back in the nineteen-nineties,” he told me. “I’m fine with it. But saying a dwarf planet isn’t a planet is like saying a pygmy hippopotamus isn’t a hippopotamus. It’s scientifically indefensible.”
  • Why, then, did the I.A.U. demote Pluto? As David Spergel, the head of the astrophysics department at Princeton University, explained to me, once scientists discovered the Kuiper Belt, which includes several Pluto look-alikes, and once they discovered Eris, a dead ringer for Pluto, the organization became worried about a slippery slope. If Pluto was a planet, Eris would have to be, too, along with any number of Kuiper Belt objects. Things risked getting out of hand. Fifteen or twenty or fifty planets was too many—who would be able to remember them all? That last question may sound absurd, but in a debate held last year at the Harvard-Smithsonian Center for Astrophysics, Gareth Williams, the astronomer representing the I.A.U.’s position, couldn’t come up with a better argument. “You’d need a mnemonic to remember the mnemonic,” he said. “We really want to keep the number of planets low.” He lost the debate on the merits, but the demotion had already been won.
Lawrence Hrubes

What If We Lost the Sky? - NYTimes.com - 0 views

  • What is the sky worth? This sounds like a philosophical question, but it might become a more concrete one. A report released last week by the National Research Council called for research into reversing climate change through a process called albedo modification: reflecting sunlight away from earth by, for instance, spraying aerosols into the atmosphere. Such a process could, some say, change the appearance of the sky — and that in turn could affect everything from our physical health to the way we see ourselves. If albedo modification were actually implemented, Alan Robock, a professor of environmental sciences at Rutgers, told Joel Achenbach at The Washington Post: “You’d get whiter skies. People wouldn’t have blue skies anymore.” And, he added, “astronomers wouldn’t be happy, because you’d have a cloud up there permanently. It’d be hard to see the Milky Way anymore.”
  • Losing the night sky would have big consequences, said Dacher Keltner, a psychology professor at the University of California, Berkeley. His recent work looks at the health effects of the emotion of awe. In a study published in January in the journal Emotion, he and his team found that people who experienced a great deal of awe had lower levels of a marker of inflammation that has been linked to physical and mental ailments. One major source of awe is the natural world. “When you go outside, and you walk in a beautiful setting, and you just feel not only uplifted but you just feel stronger,” said Dr. Keltner, “there’s clearly a neurophysiological basis for that.” And, he added, looking up at a starry sky provides “almost a prototypical awe experience,” an opportunity to feel “that you are small and modest and part of something vast.”
Lawrence Hrubes

Is Bilingualism Really an Advantage? - The New Yorker - 1 views

  • Many modern language researchers agree with that premise. Not only does speaking multiple languages help us to communicate but bilingualism (or multilingualism) may actually confer distinct advantages to the developing brain. Because a bilingual child switches between languages, the theory goes, she develops enhanced executive control, or the ability to effectively manage what are called higher cognitive processes, such as problem-solving, memory, and thought. She becomes better able to inhibit some responses, promote others, and generally emerges with a more flexible and agile mind. It’s a phenomenon that researchers call the bilingual advantage.
  • For the first half of the twentieth century, researchers actually thought that bilingualism put a child at a disadvantage, something that hurt her I.Q. and verbal development. But, in recent years, the notion of a bilingual advantage has emerged from research to the contrary, research that has seemed both far-reaching and compelling, much of it coming from the careful work of the psychologist Ellen Bialystok. For many tasks, including ones that involve working memory, bilingual speakers seem to have an edge. In a 2012 review of the evidence, Bialystok showed that bilinguals did indeed show enhanced executive control, a quality that has been linked, among other things, to better academic performance. And when it comes to qualities like sustained attention and switching between tasks effectively, bilinguals often come out ahead. It seems fairly evident then that, given a choice, you should raise your child to speak more than one language.
  • Systematically, de Bruin combed through conference abstracts from a hundred and sixty-nine conferences, between 1999 and 2012, that had to do with bilingualism and executive control. The rationale was straightforward: conferences are places where people present in-progress research. They report on studies that they are running, initial results, initial thoughts. If there were a systematic bias in the field against reporting negative results—that is, results that show no effects of bilingualism—then there should be many more findings of that sort presented at conferences than actually become published. That’s precisely what de Bruin found. At conferences, about half the presented results provided either complete or partial support for the bilingual advantage on certain tasks, while half provided partial or complete refutation. When it came to the publications that appeared after the preliminary presentation, though, the split was decidedly different. Sixty-eight per cent of the studies that demonstrated a bilingual advantage found a home in a scientific journal, compared to just twenty-nine per cent of those that found either no difference or a monolingual edge. “Our overview,” de Bruin concluded, “shows that there is a distorted image of the actual study outcomes on bilingualism, with researchers (and media) believing that the positive effect of bilingualism on nonlinguistic cognitive processes is strong and unchallenged.”
Lawrence Hrubes

The Power of Touch - The New Yorker - 0 views

  • At a home in the Romanian city of Iași, Carlson measured cortisol levels in a group of children ranging from two months to three years old. The caregiver-to-child ratio was twenty to one, and most of the children had experienced severe neglect and sensory deprivation. Multiple times a day, Carlson took saliva samples, tracking how cortisol levels fluctuated in response to stressful events. The children, she found, were hormonally off kilter. Under normal conditions, cortisol peaks just before we wake up and then tapers off; in the leagăne infants, it peaked in the afternoon and remained elevated. Those levels, in turn, correlated with faltering performance on numerous cognitive and physical assessments. Then Carlson tried an intervention modelled on the work of Joseph Sparling, a child-development specialist, and the outcomes changed. When half of the orphans received more touching from more caregivers—an increase in hugs, holding, and the making of small adjustments to clothes and hair—their performance markedly improved. They grew bigger, stronger, and more responsive, both cognitively and emotionally, and they reacted better to stress.
  • Touch is the first of the senses to develop in the human infant, and it remains perhaps the most emotionally central throughout our lives. While many researchers have appreciated its power, others have been more circumspect. Writing in 1928, John B. Watson, one of the originators of the behaviorist school of psychology, urged parents to maintain a physical boundary between themselves and their children: “Never hug and kiss them, never let them sit on your lap. If you must, kiss them once on the forehead when they say goodnight. Shake hands with them in the morning. Give them a pat on the head if they have made an extraordinarily good job on a difficult task.” Watson acknowledged that children must be bathed, clothed, and cared for, but he believed that excessive touching—that is, caressing—would create “mawkish” adults. An untouched child, he argued, “enters manhood so bulwarked with stable work and emotional habits that no adversity can quite overwhelm him.” Now we know that, to attain that result, he should have suggested the opposite: touch, as frequent and as caring as possible.
  • And yet touch is rarely purely physical. Field’s more recent work has shown that the brain is very good at distinguishing an emotional touch from a similar, but non-emotional, one. A massage chair is not a masseuse. Certain touch receptors exist solely to convey emotion to the brain, rather than sensory information about the external environment. A recent study shows that we can identify other people’s basic emotions based on how they touch us, even when they are separated from us by a curtain. And the emotions that are communicated by touch can go on to shape our behavior. One recent review found that, even if we have no conscious memory of a touch—a hand on the shoulder, say—we may be more likely to agree to a request, respond more (or less) positively to a person or product, or form closer bonds with someone.
Lawrence Hrubes

An Artist with Amnesia - The New Yorker - 2 views

  • Lately, Johnson draws for pleasure, but for three decades she had a happily hectic career as an illustrator, sometimes presenting clients with dozens of sketches a day. Her playful watercolors once adorned packages of Lotus software; for a program called Magellan, she created a ship whose masts were tethered to billowing diskettes. She made a popular postcard of two red parachutes tied together, forming a heart; several other cards were sold for years at MOMA’s gift shop. Johnson produced half a dozen covers for this magazine, including one, from 1985, that presented a sunny vision of an artist’s life: a loft cluttered with pastel canvases, each of them depicting a fragment of the skyline that is framed by a picture window. It’s as if the paintings were jigsaw pieces, and the city a puzzle being solved. Now Johnson is obsessed with making puzzles. Many times a day, she uses her grids as foundations for elaborate arrangements of letters on a page—word searches by way of Mondrian. For all the dedication that goes into her puzzles, however, they are confounding creations: very few are complete. She is assembling one of the world’s largest bodies of unfinished art.
  • Nicholas Turk-Browne, a cognitive neuroscientist at Princeton, entered the lab and greeted Johnson in the insistently zippy manner of a kindergarten teacher: “Lonni Sue! We’re going to put you in a kind of space machine and take pictures of your brain!” A Canadian with droopy dark-brown hair, he typically speaks with mellow precision. Though they had met some thirty times before, Johnson continued to regard him as an amiable stranger. Turk-Browne is one of a dozen scientists, at Princeton and at Johns Hopkins, who have been studying her, with Aline and Maggi’s consent. Aline told me, “When we realized the magnitude of Lonni Sue’s illness, my mother and I promised each other to turn what could be a tragedy into something which could help others.” Cognitive science has often gained crucial insights by studying people with singular brains, and Johnson is the first person with profound amnesia to be examined extensively with an fMRI. Several papers have been published about Johnson, and the researchers say that she could fuel at least a dozen more.
markfrankel18

Correlation is not causation | OUPblog - 0 views

  • A famous slogan in statistics is that correlation does not imply causation. We know that there is a statistical correlation between eating ice cream and drowning incidents, for instance, but ice cream consumption does not cause drowning. Where any two factors – A and B – are correlated, there are four possibilities: (1) A is a cause of B; (2) B is a cause of A; (3) the correlation is pure coincidence; and (4), as in the ice cream case, A and B are connected by a common cause. Increased ice cream consumption and drowning rates both have a common cause in warm summer weather.
  • We know that smoking causes cancer. But we also know that many people who smoke don’t get cancer. Causal claims are not falsified by counterexamples, not even by a whole bunch of them. Contraceptive pills have been shown to cause thrombosis, but only in 1 of 1000 women. Following Popper, we could say that for every case where the cause is followed by the effect there are 999 counterexamples. Instead of falsifying the hypothesis that the pill causes thrombosis, however, we list thrombosis as a known side-effect. Causation is still very much assumed even though it occurs only in rare cases.
  • One could understand a cause, for instance, as a tendency towards its effect. Smoking has a tendency towards cancer, but it doesn’t guarantee it. Contraceptive pills have a tendency towards thrombosis, but a relatively small one. However, being hit by a train strongly tends towards death. We see that tendencies come in degrees, as do causes, some strongly tending towards their effect and some only weakly.
  • ...1 more annotation...
  • Correlation does not imply causation. At best it might be taken as indicative or symptomatic of it. And perfect correlation, if this is understood along the lines of Hume’s constant conjunction, does not indicate causation at all but probably something quite different.
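The common-cause case from the excerpt above is easy to see in a quick simulation. The numbers below are entirely made up for illustration: ice cream sales and drownings are each driven by temperature but not by each other, so the two correlate strongly, and the correlation disappears once temperature is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Common cause: daily summer temperature (hypothetical data)
temperature = rng.normal(25, 5, n)

# Ice cream sales and drownings each depend on temperature only
ice_cream = 2.0 * temperature + rng.normal(0, 3, n)
drownings = 0.5 * temperature + rng.normal(0, 3, n)

# Raw correlation is strong even though neither causes the other
r = np.corrcoef(ice_cream, drownings)[0, 1]

def residuals(y, x):
    """What is left of y after regressing it on x (removing the common cause)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: correlate the leftovers after controlling for temperature
r_partial = np.corrcoef(residuals(ice_cream, temperature),
                        residuals(drownings, temperature))[0, 1]

print(f"raw correlation: {r:.2f}")          # strongly positive
print(f"partial correlation: {r_partial:.2f}")  # near zero
```

This is only a sketch of option (4) in the list above: the raw correlation reflects the shared dependence on temperature, and nothing causal survives once that dependence is removed.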
markfrankel18

Whole Foods is taking heat for selling rabbit - Quartz - 0 views

  • But worrying about data is probably just a distraction, because, ultimately, “pet” is a relative term—there are more fish in our home aquariums than there are pet dogs, and any category that lumps the two together feels inadequate.
  • Herzog started thinking about this 20 years ago, when he was sitting in a hotel bar having a beer with the psychologist and animal rights activist, Ken Shapiro. Herzog knew Shapiro was a vegan; Shapiro knew Herzog ate meat. Both men had read all of the same psychology and animal-rights literature, and both spent a lot of time working through the same philosophical questions. But somehow, they came to different conclusions about how to live their lives. “Hal, I don’t get it: why aren’t you like us?” Shapiro suddenly asked. Herzog didn’t have an answer. He still doesn’t. “I’ve been struggling with this for a long time,” Herzog says. “I can handle moral ambiguity. I can deal with it. So I don’t have that need for moral consistency that animal activists do.” He laughs a little. “And I know that their logic is better than mine, so I don’t even try arguing with them. They win in these arguments.”
  • Rabbits, as this passer-by is implying, are widely consumed in other countries. Western Europeans love rabbit sausage, slow-cooked rabbit stews, and braised bunny dishes, while the Chinese—who account for 30% of global rabbit consumption—consider rabbit’s head a delicacy. Rabbit was even a staple of the American diet at one time. It helped sustain the European transplants who migrated west across the frontier, and during World War II, eating rabbit was promoted as an act of patriotism akin to growing a victory garden. But as small farms gave way to large-scale operations, rabbit meat’s popularity melted away and other meats took over.
  • ...1 more annotation...
  • Outside of the Union Square store, the activists are talking to a small crowd. “They refuse to test products on the very animals they turn around and sell as meat,” says a man wearing fuzzy bunny ears and holding a big sign. This inconsistency presents a valid question: If I decide there is something ethically wrong with dripping chemicals into a rabbit’s eye to test its toxicity, is it hypocritical to eat that animal? Hal Herzog talks about the relative ability of an individual to live with moral inconsistency, but perhaps the rabbit debate is less about morality and instead has to do with the categorical boundaries we use to talk about the debate in the first place.
Lawrence Hrubes

BBC - Culture - The greatest mistranslations ever - 0 views

  • Life on Mars: When Italian astronomer Giovanni Virginio Schiaparelli began mapping Mars in 1877, he inadvertently sparked an entire science-fiction oeuvre. The director of Milan's Brera Observatory dubbed dark and light areas on the planet's surface 'seas' and 'continents' - labelling what he thought were channels with the Italian word 'canali'. Unfortunately, his peers translated that as 'canals', launching a theory that they had been created by intelligent lifeforms on Mars. Convinced that the canals were real, US astronomer Percival Lowell mapped hundreds of them between 1894 and 1895. Over the following two decades he published three books on Mars with illustrations showing what he thought were artificial structures built to carry water by a brilliant race of engineers. One writer influenced by Lowell's theories published his own book about intelligent Martians. In The War of the Worlds, which first appeared in serialised form in 1897, H G Wells described an invasion of Earth by deadly Martians and spawned a sci-fi subgenre. A Princess of Mars, a novel by Edgar Rice Burroughs published in 1911, also features a dying Martian civilisation, using Schiaparelli's names for features on the planet. While the water-carrying artificial trenches were a product of language and a feverish imagination, astronomers now agree that there aren't any channels on the surface of Mars. According to Nasa, "The network of crisscrossing lines covering the surface of Mars was only a product of the human tendency to see patterns, even when patterns do not exist. When looking at a faint group of dark smudges, the eye tends to connect them with straight lines."