TOK Friends: Group items tagged well-being

huffem4

My White Friend Asked Me on Facebook to Explain White Privilege. I Decided to Be Honest... - 1 views

  • I realized many of my friends—especially the white ones—have no idea what I’ve experienced/dealt with unless they were present (and aware) when it happened. There are two reasons for this: 1) because not only as a human being do I suppress the painful and uncomfortable in an effort to make it go away, I was also taught within my community (I was raised in the ’70s and ’80s—it’s shifted somewhat now) and by society at large NOT to make a fuss, speak out, or rock the boat. To just “deal with it,” lest more trouble follow (which, sadly, it often does); 2) fear of being questioned or dismissed with “Are you sure that’s what you heard?” or “Are you sure that’s what they meant?” and being angered and upset all over again by well-meaning-but-hurtful and essentially unsupportive responses.
  • the white privilege in this situation is being able to move into a “nice” neighborhood and be accepted, not harassed, made to feel unwelcome, or subjected to acts of vandalism and hostility.
  • if you’ve never had a defining moment in your childhood or your life where you realize your skin color alone makes other people hate you, you have white privilege.
  • if you’ve never been ‘the only one’ of your race in a class, at a party, on a job, etc. and/or it’s been pointed out in a “playful” fashion by the authority figure in said situation, you have white privilege.
  • if you’ve never been on the receiving end of the assumption that when you’ve achieved something it’s only because it was taken away from a white person who “deserved it,” you have white privilege.
  • if no one has ever questioned your intellectual capabilities or attendance at an elite institution based solely on your skin color, you have white privilege.
  • if you have never experienced or considered how damaging it is/was/could be to grow up without myriad role models and images in school that reflect you in your required reading material or in the mainstream media, you have white privilege.
  • if you’ve never been blindsided when you are just trying to enjoy a meal by a well-paid faculty member’s patronizing and racist assumptions about how grateful black people must feel to be in their presence, you have white privilege.
  • if you’ve never been on the receiving end of a boss’s prejudiced, uninformed “how dare she question my ideas” badmouthing based solely on his ego and your race, you have white privilege.
  • if you’ve never had to mask the fruits of your success with a floppy-eared, stuffed bunny rabbit so you won’t get harassed by the cops on the way home from your gainful employment (or never had a first date start this way), you have white privilege.
  • if you’ve never had to rewrite stories and headlines or swap photos while being trolled by racists when all you’re trying to do on a daily basis is promote positivity and share stories of hope and achievement and justice, you have white privilege.
  • As to you “being part of the problem,” trust me, nobody is mad at you for being white. Nobody. Just like nobody should be mad at me for being black. Or female. Or whatever. But what IS being asked of you is to acknowledge that white privilege DOES exist and not only to treat people of races that differ from yours “with respect and humor,” but also to stand up for fair treatment and justice, not to let “jokes” or “off-color” comments by friends, co-workers, or family slide by without challenge, and to continually make an effort to put yourself in someone else’s shoes, so we may all cherish and respect our unique and special contributions to society as much as we do our common ground.
Javier E

There's More to Life Than Being Happy - Emily Esfahani Smith - The Atlantic - 1 views

  • "Everything can be taken from a man but one thing," Frankl wrote in Man's Search for Meaning, "the last of the human freedoms -- to choose one's attitude in any given set of circumstances, to choose one's own way."
  • This uniqueness and singleness which distinguishes each individual and gives a meaning to his existence has a bearing on creative work as much as it does on human love. When the impossibility of replacing a person is realized, it allows the responsibility which a man has for his existence and its continuance to appear in all its magnitude. A man who becomes conscious of the responsibility he bears toward a human being who affectionately waits for him, or to an unfinished work, will never be able to throw away his life. He knows the "why" for his existence, and will be able to bear almost any "how."
  • "To the European," Frankl wrote, "it is a characteristic of the American culture that, again and again, one is commanded and ordered to 'be happy.' But happiness cannot be pursued; it must ensue. One must have a reason to 'be happy.'"
  • the book's ethos -- its emphasis on meaning, the value of suffering, and responsibility to something greater than the self -- seems to be at odds with our culture, which is more interested in the pursuit of individual happiness than in the search for meaning.
  • "Happiness without meaning characterizes a relatively shallow, self-absorbed or even selfish life, in which things go well, needs and desire are easily satisfied, and difficult or taxing entanglements are avoided,"
  • about 4 out of 10 Americans have not discovered a satisfying life purpose. Forty percent either do not think their lives have a clear sense of purpose or are neutral about whether their lives have purpose. Nearly a quarter of Americans feel neutral or do not have a strong sense of what makes their lives meaningful
  • the single-minded pursuit of happiness is ironically leaving people less happy, according to recent research. "It is the very pursuit of happiness," Frankl knew, "that thwarts happiness."
  • Examining their self-reported attitudes toward meaning, happiness, and many other variables -- like stress levels, spending patterns, and having children -- over a month-long period, the researchers found that a meaningful life and happy life overlap in certain ways, but are ultimately very different. Leading a happy life, the psychologists found, is associated with being a "taker" while leading a meaningful life corresponds with being a "giver."
  • How do the happy life and the meaningful life differ?
  • While happiness is an emotion felt in the here and now, it ultimately fades away, just as all emotions do
  • Happiness, they found, is about feeling good. Specifically, the researchers found that people who are happy tend to think that life is easy, they are in good physical health, and they are able to buy the things that they need and want.
  • Most importantly from a social perspective, the pursuit of happiness is associated with selfish behavior -- being, as mentioned, a "taker" rather than a "giver." The psychologists give an evolutionary explanation for this: happiness is about drive reduction. If you have a need or a desire -- like hunger -- you satisfy it, and that makes you happy. People become happy, in other words, when they get what they want.
  • "Happy people get a lot of joy from receiving benefits from others while people leading meaningful lives get a lot of joy from giving to others,"
  • People who have high meaning in their lives are more likely to help others in need.
  • What sets human beings apart from animals is not the pursuit of happiness, which occurs all across the natural world, but the pursuit of meaning, which is unique to humans
  • People whose lives have high levels of meaning often actively seek meaning out even when they know it will come at the expense of happiness. Because they have invested themselves in something bigger than themselves, they also worry more and have higher levels of stress and anxiety in their lives than happy people.
  • Meaning is not only about transcending the self, but also about transcending the present moment -- which is perhaps the most important finding of the study,
  • nearly 60 percent of all Americans today feel happy without a lot of stress or worry
  • Meaning, on the other hand, is enduring. It connects the past to the present to the future. "Thinking beyond the present moment, into the past or future, was a sign of the relatively meaningful but unhappy life,"
  • Having negative events happen to you, the study found, decreases your happiness but increases the amount of meaning you have in life.
  • "If there is meaning in life at all," Frankl wrote, "then there must be meaning in suffering."
  • "Being human always points, and is directed, to something or someone, other than oneself -- be it a meaning to fulfill or another human being to encounter. The more one forgets himself -- by giving himself to a cause to serve or another person to love -- the more human he is."
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today. “Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism,”
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Language Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada continued.
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say. For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil. A toy sketch of this affix-by-affix assembly appears after these annotations.
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances. “There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar. “For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
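
The “gawk” walkthrough above is, at bottom, concatenative word assembly: a root plus one marker per (category, degree) distinction, each narrowing the meaning. Below is a minimal Python sketch of that general mechanism. It is a toy under loud assumptions: the root, the affix strings, and the simple joining rule are invented for illustration and are not real Ithkuil morphology, whose phonological rules are far more intricate.

```python
# Toy model of agglutinative word assembly, loosely inspired by the "gawk"
# walkthrough above. The root, affix inventory, and joining rule are all
# invented for this sketch; real Ithkuil morphophonology is far richer.

# Each hypothetical affix encodes one "atom of meaning" at a given degree,
# echoing the categories named in the annotation above.
AFFIXES = {
    ("nonvolitional_response", 1): "q'",     # bodily reaction outside conscious control
    ("expectation_of_outcome", 2): "ux",     # surprise short of outright shock
    ("contextual_appropriateness", 3): "as", # impropriety short of scandal
}

def assemble(root: str, features: list[tuple[str, int]]) -> str:
    """Concatenate a root with the affix for each (category, degree) pair."""
    word = root
    for feature in features:
        word += AFFIXES[feature]  # each affix rules out neighboring meanings
    return word

# A made-up root for "vision" plus three meaning-atoms yields one dense word.
print(assemble("ap", [
    ("nonvolitional_response", 1),
    ("expectation_of_outcome", 2),
    ("contextual_appropriateness", 3),
]))  # -> apq'uxas (invented output, not actual Ithkuil)
```

The structural point survives the simplification: every added morpheme narrows the word’s meaning further, which is why assembling a single Ithkuil word took Quijada fifteen minutes of introspection.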
Javier E

The new science of death: 'There's something happening in the brain that makes no sense... - 0 views

  • Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness
  • Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived
  • when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit.
  • Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% or 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies
  • According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.
  • In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.
  • in 1975, an American medical student named Raymond Moody published a book called Life After Life.
  • Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased.
  • “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.
  • Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.
  • “I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”
  • Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences
  • Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.
  • near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine
  • It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care
  • The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual.
  • Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.
  • Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain.
  • Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221.
  • Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries
  • “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,”
  • “I think in 50 or 100 years time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”
  • it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest
  • In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”.
  • Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.
  • That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.
  • scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.
  • In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest
  • In 2019, a British woman named Audrey Schoeman who was caught in a snowstorm spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.
  • That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain
  • Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.
  • In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down
  • To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they might have known only if they had conscious awareness during the time that they were clinically dead.
  • At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,”
  • In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,”
  • The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?
  • The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.
  • Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life. Of course,
  • “As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.
  • In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irreversibly toward death, something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
  • Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.
  • In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.
  • “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.
  • “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?”
  • Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing.
Javier E

Reasons for COVID-19 Optimism on T-Cells and Herd Immunity - 0 views

  • It may well be the case that some amount of community protection kicks in below 60 percent exposure, and possibly quite a bit below that threshold, and that those who exhibit a cross-reactive T-cell immune response, while still susceptible to infection, may also have some meaningful amount of protection against severe disease.
  • early returns suggest that while the maximalist interpretation of each hypothesis is not very credible — herd immunity has probably not been reached in many places, and cross-reactive T-cell response almost certainly does not functionally immunize those who have it — more modest interpretations appear quite plausible.
  • Friston suggested that the truly susceptible portion of the population was certainly not 100 percent, as most modelers and conventional wisdom had it, but a much smaller share — surely below 50 percent, he said, and likely closer to about 20 percent. The analysis was ongoing, he said, but, “I suspect, once this has been done, it will look like the effective non-susceptible portion of the population will be about 80 percent. I think that’s what’s going to happen.”
  • one of the leading modelers, Gabriela Gomes, suggested the entire area of research was being effectively blackballed out of fear it might encourage a relaxation of pandemic vigilance. “This is the very sad reason for the absence of more optimistic projections on the development of this pandemic in the scientific literature,” she wrote on Twitter. “Our analysis suggests that herd-immunity thresholds are being achieved despite strict social-distancing measures.”
  • Gomes suggested, herd immunity could happen with as little as one quarter of the population of a community exposed — or perhaps just 20 percent. “We just keep running the models, and it keeps coming back at less than 20 percent,” she told Hamblin. “It’s very striking.” Such findings, if they held up, would be very instructive, as Hamblin writes: “It would mean, for instance, that at 25 percent antibody prevalence, New York City could continue its careful reopening without fear of another major surge in cases.”
  • But for those hoping that 25 percent represents a true ceiling for pandemic spread in a given community, well, it almost certainly does not, considering that recent serological surveys have shown that perhaps 93 percent of the population of Iquitos, Peru, has contracted the disease; as have more than half of those living in Indian slums; and as many as 68 percent in particular neighborhoods of New York City
  • overshoot of that scale would seem unlikely if the “true” threshold were as low as 20 or 25 percent.
  • But, of course, that threshold may not be the same in all places, across all populations, and is surely affected, to some degree, by the social behavior taken to protect against the spread of the disease.
  • we probably err when we conceive of group immunity in simplistically binary terms. While herd immunity is a technical term referring to a particular threshold at which point the disease can no longer spread, some amount of community protection against that spread begins almost as soon as the first people are exposed, with each case reducing the number of unexposed and vulnerable potential cases in the community by one
  • you would not expect a disease to spread in a purely exponential way until the point of herd immunity, at which time the spread would suddenly stop. Instead, you would expect that growth to slow as more people in the community were exposed to the disease, with most of them emerging relatively quickly with some immune response. Add to that the effects of even modest, commonplace protections — intuitive social distancing, some amount of mask-wearing — and you could expect to get an infection curve that tapers off well shy of 60 percent exposure. A toy simulation of this tapering appears after these annotations.
  • Looking at the data, we see that transmissions in many severely impacted states began to slow down in July, despite limited interventions. This is especially notable in states like Arizona, Florida, and Texas. While we believe that changes in human behavior and changes in policy (such as mask mandates and closing of bars/nightclubs) certainly contributed to the decrease in transmission, it seems unlikely that these were the primary drivers behind the decrease. We believe that many regions obtained a certain degree of temporary herd immunity after reaching 10-35 percent prevalence under the current conditions. We call this 10-35 percent threshold the effective herd immunity threshold.
  • Indeed, that is more or less what was recently found by Youyang Gu, to date the best modeler of pandemic spread in the U.S.
  • he cautioned again that he did not mean to imply that the natural herd-immunity level was as low as 10 percent, or even 35 percent. Instead, he suggested it was a plateau determined in part by better collective understanding of the disease and what precautions to take
  • While Gu estimates national prevalence as just below 20 percent (i.e., right in the middle of his range of effective herd immunity), it still counts, I think, as encouraging — even if people in hard-hit communities won’t truly breathe a sigh of relief until vaccines arrive.
  • If you can get real protection starting at 35 percent, it means that even a mediocre vaccine, administered much more haphazardly to a population with some meaningful share of vaccination skeptics, could still achieve community protection pretty quickly. And that is really significant — making both the total lack of national coordination on rollout and the likely “vaccine wars” much less consequential.
  • At least 20 percent of the public, and perhaps 50 percent, had some preexisting, cross-protective T-cell response to SARS-CoV-2, according to one much-discussed recent paper. An earlier paper had put the figure at between 40 and 60 percent. And a third had found an even higher prevalence: 81 percent.
  • The T-cell story is similarly encouraging in its big-picture implications without being necessarily paradigm-changing
  • These numbers suggest their own heterogeneity — that different populations, with different demographics, would likely exhibit different levels of cross-reactive T-cell immune response
  • The most optimistic interpretation of the data was given to me by Francois Balloux, a somewhat contrarian disease geneticist and the director of University College London’s Genetics Institute
  • According to him, a cross-reactive T-cell response wouldn’t prevent infection, but would probably mean a faster immune response, a shorter period of infection, and a “massively” reduced risk of severe illness — meaning, he guessed, that somewhere between a third and three-quarters of the population carried into the epidemic significant protection against its scariest outcomes
  • the distribution of this T-cell response could explain at least some, and perhaps quite a lot, of COVID-19’s age skew when it comes to disease severity and mortality, since the young are the most exposed to other coronaviruses, and the protection tapers as you get older and spend less time in environments, like schools, where these viruses spread so promiscuously.
  • Balloux told me he believed it was also possible that the heterogeneous distribution of T-cell protection also explains some amount of the apparent decline in disease severity over time within countries on different pandemic timelines — a phenomenon that is more conventionally attributed to infection spreading more among the young, better treatment, and more effective protection of the most vulnerable (especially the old).
  • Going back to Youyang Gu’s analysis, what he calls the “implied infection fatality rate” — essentially an estimated ratio based on his modeling of untested cases — has fallen for the country as a whole from about one percent in March to about 0.8 percent in mid-April, 0.6 percent in May, and down to about 0.25 percent today.
  • even as we have seemed to reach a second peak of coronavirus deaths, the rate of death from COVID-19 infection has continued to decline — total deaths have gone up, but much less than the number of cases
  • In other words, at the population level, the lethality of the disease in America has fallen by about three-quarters since its peak. This is, despite everything that is genuinely horrible about the pandemic and the American response to it, rather fantastic.
  • there may be some possible “mortality displacement,” whereby the most severe cases show up first, in the most susceptible people, leaving behind a relatively protected population whose experience overall would be more mild, and that T-cell response may play a significant role in determining that susceptibility.
  • That, again, is Balloux’s interpretation — the most expansive assessment of the T-cell data offered to me
  • The most conservative assessment came from Sarah Fortune, the chair of Harvard’s Department of Immunology
  • Fortune cautioned not to assume that cross-protection was playing a significant role in determining severity of illness in a given patient. Those with such a T-cell response, she told me, would likely see a faster onset of robust response, yes, but that may or may not yield a shorter period of infection and viral shedding
  • Most of the scientists, doctors, epidemiologists, and immunologists I spoke to fell between those two poles, suggesting the T-cell cross-immunity findings were significant without necessarily being determinative — that they may help explain some of the shape of pandemic spread through particular populations, but only some of the dynamics of that spread.
  • he told me he believed, in the absence of that data, that T-cell cross-immunity from exposure to previous coronaviruses “might explain different disease severity in different people,” and “could certainly be part of the explanation for the age skew, especially for why the very young fare so well.”
  • the headline finding was quite clear and explicitly stated: that preexisting T-cell response came primarily via the variety of T-cells called CD4 T-cells, and that this dynamic was consistent with the hypothesis that the mechanism was inherited from previous exposure to a few different “common cold” coronaviruses
  • “This potential preexisting cross-reactive T-cell immunity to SARS-CoV-2 has broad implications,” the authors wrote, “as it could explain aspects of differential COVID-19 clinical outcomes, influence epidemiological models of herd immunity, or affect the performance of COVID-19 candidate vaccines.”
  • “This is at present highly speculative,” they cautioned.
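
Two quantitative claims in these annotations can be illustrated with a few lines of code: that spread slows continuously as the susceptible pool shrinks rather than halting abruptly at a threshold, and that variation in susceptibility (the mechanism behind the Gomes and Friston estimates) can burn an epidemic out well below the textbook 1 - 1/R0 level. The discrete-time SIR-style sketch below is illustrative only; R0, the recovery rate, the seed fraction, and the gamma-distributed susceptibility are assumptions of the toy, not parameters from the studies discussed.

```python
import numpy as np

# Minimal discrete-time SIR-style simulation, with optional heterogeneity
# in susceptibility. All parameter values are illustrative assumptions.
R0 = 2.5              # assumed basic reproduction number
recovery = 0.1        # assumed recovery rate per day
beta = R0 * recovery  # implied transmission rate
days = 365

def attack_rate(cv: float, n: int = 100_000, seed: int = 0) -> float:
    """Fraction of the population ever infected. Relative susceptibility is
    gamma-distributed with mean 1 and coefficient of variation cv
    (cv=0 reproduces the homogeneous textbook model)."""
    rng = np.random.default_rng(seed)
    if cv == 0:
        suscept = np.ones(n)
    else:
        shape = 1.0 / cv**2
        suscept = rng.gamma(shape, 1.0 / shape, n)  # mean-1 susceptibility
    susceptible = np.ones(n, dtype=bool)
    infectious = 1e-4  # fraction currently infectious (seed cases)
    total = infectious
    for _ in range(days):
        # daily infection probability for each still-susceptible person
        p = 1.0 - np.exp(-beta * infectious * suscept)
        newly = susceptible & (rng.random(n) < p)
        susceptible &= ~newly
        infectious = infectious * (1.0 - recovery) + newly.mean()
        total += newly.mean()
    return total

print(f"homogeneous population:   {attack_rate(cv=0.0):.0%} ever infected")
print(f"heterogeneous (cv = 1.5): {attack_rate(cv=1.5):.0%} ever infected")
# For R0 = 2.5 the classic threshold is 1 - 1/R0 = 60%, and the homogeneous
# epidemic overshoots it; with wide variation in susceptibility, the most
# susceptible are infected early and spread tapers off far sooner.
```

The toy does not estimate any real-world threshold; it only shows the direction of the effect, which is the shape of the argument these annotations make.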
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable. (A short simulation sketch illustrating this appears at the end of these notes.)
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • “we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, the Boston University researcher Carey Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias.”
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
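To make the law-of-large-numbers point from the baseball-phenom survey concrete, here is a minimal simulation sketch. It is not from the article: the .275 talent level, the at-bat counts, and the function name are illustrative assumptions.

```python
# Minimal sketch of the law-of-large-numbers point behind the
# baseball-phenom survey. All numbers are illustrative assumptions.
import random

random.seed(42)

TRUE_TALENT = 0.275  # hypothetical hitter: probability of a hit per at bat

def batting_average(at_bats: int) -> float:
    """Simulate one hitter's average over a given number of at bats."""
    hits = sum(random.random() < TRUE_TALENT for _ in range(at_bats))
    return hits / at_bats

for n in (10, 500):  # early season vs. full season
    seasons = [batting_average(n) for _ in range(10_000)]
    share_over_450 = sum(avg >= 0.450 for avg in seasons) / len(seasons)
    print(f"{n:>3} at bats: {share_over_450:.1%} of simulated hitters at .450+")

# Typical output: after 10 at bats, roughly one simulated hitter in ten is
# batting .450 or better; after 500 at bats, essentially none are. Outlier
# averages are an artifact of small samples; regression to the mean follows
# as the sample grows.
```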
Keiko E

University of Wisconsin Study Finds Eudaimonic Happiness Lessens the 'Bite' of Risk Fac... - 0 views

  • Some of the newest evidence suggests that people who focus on living with a sense of purpose as they age are more likely to remain cognitively intact, have better mental health and even live longer than people who focus on achieving feelings of happiness.
  • "Eudaimonia" is a Greek word associated with Aristotle and often mistranslated as "happiness"—which has contributed to misunderstandings about what happiness is. Some experts say Aristotle meant "well-being" when he wrote that humans can attain eudaimonia by fulfilling their potential. Today, the goal of understanding happiness and well-being, beyond philosophical interest, is part of a broad inquiry into aging and why some people avoid early death and disease. Psychologists investigating eudaimonic versus hedonic types of happiness over the past five to 10 years have looked at each type's unique effects on physical and psychological health.
  • In a separate analysis of the same group of subjects, researchers have found those with greater purpose in life were less likely to be impaired in carrying out living and mobility functions, like housekeeping, managing money and walking up or down stairs. And over a five-year period they were significantly less likely to die—by some 57%—than those with low purpose in life.
  • ...1 more annotation...
  • The two types of well-being aren't necessarily at odds, and there is overlap. Striving to live a meaningful life or to do good work should bring about feelings of happiness, of course. But people who primarily seek extrinsic rewards, such as money or status, often aren't as happy, says Richard Ryan, professor of psychology, psychiatry and education at the University of Rochester.
  • The relationship between “happiness” and “well-being” and how they affect people.
Javier E

Social Media and the Devolution of Friendship: Full Essay (Pts I & II) » Cybo... - 1 views

  • social networking sites create pressure to put time and effort into tending weak ties, and how it can be impossible to keep up with them all. Personally, I also find it difficult to keep up with my strong ties. I’m a great “pick up where we left off” friend, as are most of the people closest to me (makes sense, right?). I’m decidedly sub-awesome, however, at being in constant contact with more than a few people at a time.
  • the devolution of friendship. As I explain over the course of this essay, I link the devolution of friendship to—but do not “blame” it on—the affordances of various social networking platforms, especially (but not exclusively) so-called “frictionless sharing” features.
  • I’m using the word here in the same way that people use it to talk about the devolution of health care. One example of devolution of health care is some outpatient surgeries: patients are allowed to go home after their operations, but they still require a good deal of post-operative care such as changing bandages, irrigating wounds, administering medications, etc. Whereas before these patients would stay in the hospital and nurses would perform the care-labor necessary for their recoveries, patients must now find their own caregivers (usually family members or friends; sometimes themselves) to perform free care-labor. In this context, devolution marks the shift of labor and responsibility away from the medical establishment and onto the patient; within the patient-medical establishment collaboration, the patient must now provide a greater portion of the necessary work. Similarly, in some ways, we now expect our friends to do a greater portion of the work of being friends with us.
  • ...13 more annotations...
  • Through social media, “sharing with friends” is rationalized to the point of relentless efficiency. The current apex of such rationalization is frictionless sharing: we no longer need to perform the labor of telling our individual friends about what we read online, or of copy-pasting links and emailing them to “the list,” or of clicking a button for one-step posting of links on our Facebook walls. With frictionless sharing, all we have to do is look, or listen; what we’ve read or watched or listened to is then “shared” or “scrobbled” to our Facebook, Twitter, Tumblr, or whatever other online profiles. Whether we share content actively or passively, however, we feel as though we’ve done our half of the friendship-labor by ‘pushing’ the information to our walls, streams, and tumblelogs. It’s then up to our friends to perform their halves of the friendship-labor by ‘pulling’ the information we share from those platforms.
  • We’re busy people; we like the idea of making one announcement on Facebook and being done with it, rather than having to repeat the same story over and over again to different friends individually. We also like not always having to think about which friends might like which stories or songs; we like the idea of sharing with all of our friends at once, and then letting them sort out amongst themselves who is and isn’t interested. Though social media can create burdensome expectations to keep up with strong ties, weak ties, and everyone in between, social media platforms can also be very efficient. Using the same moment of friendship-labor to tend multiple friendships at once kills more birds with fewer stones.
  • sometimes we like the devolution of friendship. When we have to ‘pull’ friendship-content instead of receiving it in a ‘push’, we can pick and choose which content items to pull. We can ignore the baby pictures, or the pet pictures, or the sushi pictures—whatever it is our friends post that we only pretend to care about
  • I’ve been thinking since, however, on what it means to view our friends as “generalized others.” I may now feel less like a “creepy stalker” when I click on a song in someone’s Spotify feed, but I don’t exactly feel ‘shared with’ either. Far as I know, I’ve never been SpotiVaguebooked (or SubSpotified?); I have no reason to think anyone is speaking to me personally as they listen to music, or as they choose not to disable scrobbling (if they make that choice consciously at all). I may have been granted the opportunity to view something, but it doesn’t follow that what I’m viewing has anything to do with me unless I choose to make it about me. Devolved friendship means it’s not up to us to interact with our friends personally; instead it’s now up to our friends to make our generalized broadcasts personal.
  • While I won’t go so far as to say they’re definitely ‘problems,’ there are two major things about devolved friendship that I think are worth noting. The first is the non-uniform rationalization of friendship-labor, and the second is the depersonalization of friendship-labor.
  • In short, “sharing” has become a lot easier and a lot more efficient, but “being shared with” has become much more time-consuming, demanding, and inefficient (especially if we don’t ignore most of our friends most of the time). Given this, expecting our friends to keep up with our social media content isn’t expecting them to meet us halfway; it’s asking them to take on the lion’s share of staying in touch with us. Our jobs (in this role) have gotten easier; our friends’ jobs have gotten harder.
  • The second thing worth noting is that devolved friendship is also depersonalized friendship.
  • Personal interaction doesn’t just happen on Spotify, and since I was hoping Spotify would be the New Porch, I initially found Spotify to be somewhat lonely-making. It’s the mutual awareness of presence that gives companionate silence its warmth, whether in person or across distance. The silence within Spotify’s many sounds, on the other hand, felt more like being on the outside looking in. This isn’t to say that Spotify can’t be social in a more personal way; once I started sending tracks to my friends, a few of them started sending tracks in return. But it took a lot more work to get to that point, which gets back to the devolution of friendship (as I explain below).
  • Within devolved friendship interactions, it takes less effort to be polite while secretly waiting for someone to please just stop talking.
  • When we consider the lopsided rationalization of ‘sharing’ and ‘shared with,’ as well as the depersonalization of frictionless sharing and generalized broadcasting, what becomes clear is this: the social media deck is stacked in such a way as to make being ‘a self’ easier and more rewarding than being ‘a friend.’
  • It’s easy to share, to broadcast, to put our selves and our tastes and our identity performances out into the world for others to consume; what feedback and friendship we get in return comes in response to comparatively little effort and investment from us. It takes a lot more work, however, to do the consumption, to sift through everything all (or even just some) of our friends produce, to do the work of connecting to our friends’ generalized broadcasts so that we can convert their depersonalized shares into meaningful friendship-labor.
  • We may be prosumers of social media, but the reward structures of social media sites encourage us to place greater emphasis on our roles as share-producers—even though many of us probably spend more time consuming shared content than producing it. There’s a reason for this, of course; the content we produce (for free) is what fuels every last ‘Web 2.0’ machine, and its attendant self-centered sociality is the linchpin of the peculiarly Silicon Valley concept of “Social” (something Nathan Jurgenson and I discuss together in greater detail here). It’s not super-rewarding to be one of ten people who “like” your friend’s shared link, but it can feel rewarding to get 10 “likes” on something you’ve shared—even if you have hundreds or thousands of ‘friends.’ Sharing is easy; dealing with all that shared content is hard.
  • I wonder sometimes if the shifts in expectation that accompany devolved friendship don’t migrate across platforms and contexts in ways we don’t always see or acknowledge. Social media affects how we see the world—and how we feel about being seen in the world—even when we’re not engaged directly with social media websites. It’s not a stretch, then, to imagine that the affordances of social media platforms might also affect how we see friendship and our obligations as friends most generally.
Javier E

Stop Googling. Let's Talk. - The New York Times - 3 views

  • In a 2015 study by the Pew Research Center, 89 percent of cellphone owners said they had used their phones during the last social gathering they attended. But they weren’t happy about it; 82 percent of adults felt that the way they used their phones in social settings hurt the conversation.
  • I’ve been studying the psychology of online connectivity for more than 30 years. For the past five, I’ve had a special focus: What has happened to face-to-face conversation in a world where so many people say they would rather text than talk?
  • Young people spoke to me enthusiastically about the good things that flow from a life lived by the rule of three—it’s fine to look at your phone in a group as long as at least three other people have their heads up—which you can follow not only during meals but all the time. First of all, there is the magic of the always available elsewhere. You can put your attention wherever you want it to be. You can always be heard. You never have to be bored.
  • ...23 more annotations...
  • But the students also described a sense of loss.
  • A 15-year-old boy told me that someday he wanted to raise a family, not the way his parents are raising him (with phones out during meals and in the park and during his school sports events) but the way his parents think they are raising him — with no phones at meals and plentiful family conversation. One college junior tried to capture what is wrong about life in his generation. “Our texts are fine,” he said. “It’s what texting does to our conversations when we are together that’s the problem.”
  • One teacher observed that the students “sit in the dining hall and look at their phones. When they share things together, what they are sharing is what is on their phones.” Is this the new conversation? If so, it is not doing the work of the old conversation. The old conversation taught empathy. These students seem to understand each other less.
  • In 2010, a team at the University of Michigan led by the psychologist Sara Konrath put together the findings of 72 studies that were conducted over a 30-year period. They found a 40 percent decline in empathy among college students, with most of the decline taking place after 2000.
  • We’ve gotten used to being connected all the time, but we have found ways around conversation — at least from conversation that is open-ended and spontaneous, in which we play with ideas and allow ourselves to be fully present and vulnerable. But it is in this type of conversation — where we learn to make eye contact, to become aware of another person’s posture and tone, to comfort one another and respectfully challenge one another — that empathy and intimacy flourish. In these conversations, we learn who we are.
  • the trend line is clear. It’s not only that we turn away from talking face to face to chat online. It’s that we don’t allow these conversations to happen in the first place because we keep our phones in the landscape.
  • It’s a powerful insight. Studies of conversation both in the laboratory and in natural settings show that when two people are talking, the mere presence of a phone on a table between them or in the periphery of their vision changes both what they talk about and the degree of connection they feel. People keep the conversation on topics where they won’t mind being interrupted. They don’t feel as invested in each other. Even a silent phone disconnects us.
  • Yalda T. Uhls was the lead author on a 2014 study of children at a device-free outdoor camp. After five days without phones or tablets, these campers were able to read facial emotions and correctly identify the emotions of actors in videotaped scenes significantly better than a control group. What fostered these new empathic responses? They talked to one another. In conversation, things go best if you pay close attention and learn how to put yourself in someone else’s shoes. This is easier to do without your phone in hand. Conversation is the most human and humanizing thing that we do.
  • At a nightly cabin chat, a group of 14-year-old boys spoke about a recent three-day wilderness hike. Not that many years ago, the most exciting aspect of that hike might have been the idea of roughing it or the beauty of unspoiled nature. These days, what made the biggest impression was being phoneless. One boy called it “time where you have nothing to do but think quietly and talk to your friends.” The campers also spoke about their new taste for life away from the online feed. Their embrace of the virtue of disconnection suggests a crucial connection: The capacity for empathic conversation goes hand in hand with the capacity for solitude.
  • In solitude we find ourselves; we prepare ourselves to come to conversation with something to say that is authentic, ours. If we can’t gather ourselves, we can’t recognize other people for who they are. If we are not content to be alone, we turn others into the people we need them to be. If we don’t know how to be alone, we’ll only know how to be lonely.
  • we have put this virtuous circle in peril. We turn time alone into a problem that needs to be solved with technology.
  • People sometimes say to me that they can see how one might be disturbed when people turn to their phones when they are together. But surely there is no harm when people turn to their phones when they are by themselves? If anything, it’s our new form of being together.
  • But this way of dividing things up misses the essential connection between solitude and conversation. In solitude we learn to concentrate and imagine, to listen to ourselves. We need these skills to be fully present in conversation.
  • One start toward reclaiming conversation is to reclaim solitude. Some of the most crucial conversations you will ever have will be with yourself. Slow down sufficiently to make this possible. And make a practice of doing one thing at a time. Think of unitasking as the next big thing. In every domain of life, it will increase performance and decrease stress.
  • Multitasking comes with its own high, but when we chase after this feeling, we pursue an illusion. Conversation is a human way to practice unitasking.
  • Our phones are not accessories, but psychologically potent devices that change not just what we do but who we are. A second path toward conversation involves recognizing the degree to which we are vulnerable to all that connection offers. We have to commit ourselves to designing our products and our lives to take that vulnerability into account.
  • We can choose not to carry our phones all the time. We can park our phones in a room and go to them every hour or two while we work on other things or talk to other people. We can carve out spaces at home or work that are device-free, sacred spaces for the paired virtues of conversation and solitude.
  • Families can find these spaces in the day to day — no devices at dinner, in the kitchen and in the car.
  • Engineers are ready with more ideas: What if our phones were not designed to keep us attached, but to do a task and then release us? What if the communications industry began to measure the success of devices not by how much time consumers spend on them but by whether it is time well spent?
  • The young woman who is so clear about the seven minutes that it takes to see where a conversation is going admits that she often doesn’t have the patience to wait for anything near that kind of time before going to her phone. In this she is characteristic of what the psychologists Howard Gardner and Katie Davis called the “app generation,” which grew up with phones in hand and apps at the ready. It tends toward impatience, expecting the world to respond like an app, quickly and efficiently. The app way of thinking starts with the idea that actions in the world will work like algorithms: Certain actions will lead to predictable results.
  • This attitude can show up in friendship as a lack of empathy. Friendships become things to manage; you have a lot of them, and you come to them with tools
  • here is a first step: To reclaim conversation for yourself, your friendships and society, push back against viewing the world as one giant app. It works the other way, too: Conversation is the antidote to the algorithmic way of looking at life because it teaches you about fluidity, contingency and personality.
  • We have time to make corrections and remember who we are — creatures of history, of deep psychology, of complex relationships, of conversations, artless, risky and face to face.
Javier E

The Navy's USS Gabrielle Giffords and the Future of Work - The Atlantic - 0 views

  • Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone.
  • Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles.
  • If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.”
  • ...40 more annotations...
  • “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.”
  • The phenomenon is sped by automation, which usurps routine tasks, leaving employees to handle the nonroutine and unanticipated—and the continued advance of which throws the skills employers value into flux
  • Or, for that matter, on the relevance of the question What do you want to be when you grow up?
  • By 2020, a 2016 World Economic Forum report predicted, “more than one-third of the desired core skill sets of most occupations” will not have been seen as crucial to the job when the report was published
  • I asked John Sullivan, a prominent Silicon Valley talent adviser, why anyone should take the time to master anything at all. “You shouldn’t!” he replied.
  • Minimal manning—and the evolution of the economy more generally—requires a different kind of worker, with not only different acquired skills but different inherent abilities
  • It has implications for the nature and utility of a college education, for the path of careers, for inequality and employability—even for the generational divide.
  • Then, in 2001, Donald Rumsfeld arrived at the Pentagon. The new secretary of defense carried with him a briefcase full of ideas from the corporate world: downsizing, reengineering, “transformational” technologies. Almost immediately, what had been an experimental concept became an article of faith
  • But once cadets got into actual command environments, which tend to be fluid and full of surprises, a different picture emerged. “Psychological hardiness”—a construct that includes, among other things, a willingness to explore “multiple possible response alternatives,” a tendency to “see all experience as interesting and meaningful,” and a strong sense of self-confidence—was a better predictor of leadership ability in officers after three years in the field.
  • Because there really is no such thing as multitasking—just a rapid switching of attention—I began to feel overstrained, put upon, and finally irked by the impossible set of concurrent demands. Shouldn’t someone be giving me a hand here? This, Hambrick explained, meant I was hitting the limits of working memory—basically, raw processing power—which is an important aspect of “fluid intelligence” and peaks in your early 20s. This is distinct from “crystallized intelligence”—the accumulated facts and know-how on your hard drive—which peaks in your 50s.
  • Others noticed the change but continued to devote equal attention to all four tasks. Their scores fell. This group, Hambrick found, was high in “conscientiousness”—a trait that’s normally an overwhelming predictor of positive job performance. We like conscientious people because they can be trusted to show up early, double-check the math, fill the gap in the presentation, and return your car gassed up even though the tank was nowhere near empty to begin with. What struck Hambrick as counterintuitive and interesting was that conscientiousness here seemed to correlate with poor performance.
  • he discovered another correlation in his test: The people who did best tended to score high on “openness to new experience”—a personality trait that is normally not a major job-performance predictor and that, in certain contexts, roughly translates to “distractibility.”
  • To borrow the management expert Peter Drucker’s formulation, people with this trait are less focused on doing things right, and more likely to wonder whether they’re doing the right things.
  • High in fluid intelligence, low in experience, not terribly conscientious, open to potential distraction—this is not the classic profile of a winning job candidate. But what if it is the profile of the winning job candidate of the future?
  • One concerns “grit”—a mind-set, much vaunted these days in educational and professional circles, that allows people to commit tenaciously to doing one thing well
  • These ideas are inherently appealing; they suggest that dedication can be more important than raw talent, that the dogged and conscientious will be rewarded in the end.
  • he studied West Point students and graduates.
  • Traditional measures such as SAT scores and high-school class rank “predicted leader performance in the stable, highly regulated environment of West Point” itself.
  • It would be supremely ironic if the advance of the knowledge economy had the effect of devaluing knowledge. But that’s what I heard, recurrently.
  • “Fluid, learning-intensive environments are going to require different traits than classical business environments,” I was told by Frida Polli, a co-founder of an AI-powered hiring platform called Pymetrics. “And they’re going to be things like ability to learn quickly from mistakes, use of trial and error, and comfort with ambiguity.”
  • “We’re starting to see a big shift,” says Guy Halfteck, a people-analytics expert. “Employers are looking less at what you know and more and more at your hidden potential” to learn new things
  • advice to employers? Stop hiring people based on their work experience. Because in these environments, expertise can become an obstacle.
  • “The Curse of Expertise.” The more we invest in building and embellishing a system of knowledge, they found, the more averse we become to unbuilding it.
  • All too often experts, like the mechanic in LePine’s garage, fail to inspect their knowledge structure for signs of decay. “It just didn’t occur to him,” LePine said, “that he was repeating the same mistake over and over.”
  • The devaluation of expertise opens up ample room for different sorts of mistakes—and sometimes creates a kind of helplessness.
  • Aboard littoral combat ships, the crew lacks the expertise to carry out some important tasks, and instead has to rely on civilian help
  • Meanwhile, the modular “plug and fight” configuration was not panning out as hoped. Converting a ship from sub-hunter to minesweeper or minesweeper to surface combatant, it turned out, was a logistical nightmare
  • So in 2016 the concept of interchangeability was scuttled for a “one ship, one mission” approach, in which the extra 20-plus sailors became permanent crew members
  • “As equipment breaks, [sailors] are required to fix it without any training,” a Defense Department Test and Evaluation employee told Congress. “Those are not my words. Those are the words of the sailors who were doing the best they could to try to accomplish the missions we gave them in testing.”
  • These results were, perhaps, predictable given the Navy’s initial, full-throttle approach to minimal manning—and are an object lesson on the dangers of embracing any radical concept without thinking hard enough about the downsides
  • a world in which mental agility and raw cognitive speed eclipse hard-won expertise is a world of greater exclusion: of older workers, slower learners, and the less socially adept.
  • if you keep going down this road, you end up with one really expensive ship with just a few people on it who are geniuses … That’s not a future we want to see, because you need a large enough crew to conduct multiple tasks in combat.
  • What does all this mean for those of us in the workforce, and those of us planning to enter it? It would be wrong to say that the 10,000-hours-of-deliberate-practice idea doesn’t hold up at all. In some situations, it clearly does.
  • A spinal surgery will not be performed by a brilliant dermatologist. A criminal-defense team will not be headed by a tax attorney. And in tech, the demand for specialized skills will continue to reward expertise handsomely.
  • But in many fields, the path to success isn’t so clear. The rules keep changing, which means that highly focused practice has a much lower return
  • In uncertain environments, Hambrick told me, “specialization is no longer the coin of the realm.”
  • It leaves us with lifelong learning,
  • I found myself the target of career suggestions. “You need to be a video guy, an audio guy!” the Silicon Valley talent adviser John Sullivan told me, alluding to the demise of print media
  • I found the prospect of starting over just plain exhausting. Building a professional identity takes a lot of resources—money, time, energy. After it’s built, we expect to reap gains from our investment, and—let’s be honest—even do a bit of coasting. Are we equipped to continually return to apprentice mode? Will this burn us out?
  • Everybody I met on the Giffords seemed to share that mentality. They regarded every minute on board—even during a routine transit back to port in San Diego Harbor—as a chance to learn something new.
Javier E

You Have Permission to Be a Smartphone Skeptic - The Bulwark - 0 views

  • the brief return of one of my favorite discursive topics—are the kids all right?—in one of my least-favorite variations: why shouldn’t each of them have a smartphone and tablet?
  • One camp says yes, the kids are fine
  • complaints about screen time merely conceal a desire to punish hard-working parents for marginally benefiting from climbing luxury standards, provide examples of the moral panic occasioned by all new technologies, or mistakenly blame screens for ill effects caused by the general political situation.
  • ...38 more annotations...
  • No, says the other camp, led by Jonathan Haidt; the kids are not all right, their devices are partly to blame, and here are the studies showing why.
  • we should not wait for the replication crisis in the social sciences to resolve itself before we consider the question of whether the naysayers are on to something. And normal powers of observation and imagination should be sufficient to make us at least wary of smartphones.
  • These powerful instruments represent a technological advance on par with that of the power loom or the automobile
  • The achievement can be difficult to properly appreciate because instead of exerting power over physical processes and raw materials, they operate on social processes and the human psyche: They are designed to maximize attention, to make it as difficult as possible to look away.
  • they have transformed the qualitative experience of existing in the world. They give a person’s sociality the appearance and feeling of a theoretically endless open network, while in reality, algorithms quietly sort users into ideological, aesthetic, memetic cattle chutes of content.
  • Importantly, the process by which smartphones change us requires no agency or judgment on the part of a teen user, and yet that process is designed to provide what feels like a perfectly natural, inevitable, and complete experience of the world.
  • Smartphones offer a tactile portal to a novel digital environment, and this environment is not the kind of space you enter and leave. It is not a particular activity that you start and stop and resume, and it is not a social scene that you might abandon when it suits you.
  • It is instead a complete shadow world of endless images; disembodied, manipulable personas; and the ever-present gaze of others. It lives in your pocket and in your mind.
  • One reason commonly offered for maintaining our socio-technological status quo is that nothing really has changed with the advent of the internet, of Instagram, of TikTok and YouTube and 4chan
  • The price you pay for its availability—and the engine of its functioning—is that you are always available to it, as well. Unless you have a strength of will that eludes most adults, its emissaries can find you at any hour and in any place to issue your summons to the grim pleasure palace.
  • the self-restraint and self-discipline required to use a smartphone well—that is, to treat it purely as an occasional tool rather than as a totalizing way of life—are unreasonable things to demand of teenagers
  • these are unreasonable things to demand of me, a fully adult woman
  • To enjoy the conveniences that a smartphone offers, I must struggle against the lure of the permanent scroll, the notification, the urge to fix my eyes on the circle of light and keep them fixed. I must resist the default pseudo-activity the smartphone always calls its user back to, if I want to have any hope of filling the moments of my day with the real activity I believe is actually valuable.
  • for a child or teen still learning the rudiments of self-control, still learning what is valuable and fulfilling, still learning how to prioritize what is good over the impulse of the moment, it is an absurd bar to be asked to clear
  • The expectation that children and adolescents will navigate new technologies with fully formed and muscular capacities for reason and responsibility often seems to go along with a larger abdication of responsibility on the part of the adults involved.
  • adults have frequently given in to a Faustian temptation: offering up their children’s generation to be used as guinea pigs in a mass longitudinal study in exchange for a bit more room to breathe in their own undeniably difficult roles as educators, caretakers, and parents.
  • And this we must do without waiting for social science to hand us a comprehensive mandate it is fundamentally unable to provide; without cowering in panic over moral panics
  • The pre-internet advertising world was vicious, to be sure, but when the “pre-” came off, its vices were moved into a compound interest account. In the world of online advertising, at any moment, in any place, a user engaged in an infinite scroll might be presented with native content about how one Instagram model learned to accept her chunky (size 4) thighs, while in the next clip, another model relates how a local dermatologist saved her from becoming an unlovable crone at the age of 25
  • developing pathological interests and capacities used to take a lot more work than it does now
  • You had to seek it out, as you once had to seek out pornography and look someone in the eye while paying for it. You were not funneled into it by an omnipresent stream of algorithmically curated content—the ambience of digital life, so easily mistaken by the person experiencing it as fundamentally similar to the non-purposive ambience of the natural world.
  • And when interpersonal relations between teens become sour, nasty, or abusive, as they often do and always have, the unbalancing effects of transposing social life to the internet become quite clear
  • For both young men and young women, the pornographic scenario—dominance and degradation, exposure and monetization—creates an experiential framework for desires that they are barely experienced enough to understand.
  • This is not a world I want to live in. I think it hurts everyone; but I especially think it hurts those young enough to receive it as a natural state of affairs rather than as a profound innovation.
  • so I am baffled by the most routine objection to any blaming of smartphones for our society-wide implosion of teenagers’ mental health,
  • In short, and inevitably, today’s teenagers are suffering from capitalism—specifically “late capitalism,
  • what shocks me about this rhetorical approach is the rush to play defense for Apple and its peers, the impulse to wield the abstract concept of capitalism as a shield for actually existing, extremely powerful, demonstrably ruthless capitalist actors.
  • This motley alliance of left-coded theory about the evils of business and right-coded praxis in defense of a particular evil business can be explained, I think, by a deeper desire than overthrowing capitalism. It is the desire not to be a prude or hysteric or bumpkin.
  • No one wants to come down on the side of tamping off pleasures and suppressing teen activity.
  • No one wants to be the shrill or leaden antagonist of a thousand beloved movies, inciting moral panics, scheming about how to stop the youths from dancing on Sunday.
  • But commercial pioneers are only just beginning to explore new frontiers in the profit-driven, smartphone-enabled weaponization of our own pleasures against us
  • To limit your moral imagination to the archetypes of the fun-loving rebel versus the stodgy enforcers in response to this emerging reality is to choose to navigate it with blinders on, to be a useful idiot for the robber barons of online life rather than a challenger to the corrupt order they maintain.
  • The very basic question that needs to be asked with every product rollout and implementation is: what technologies enable a good human life?
  • this question is not, ultimately, the province of social scientists, notwithstanding how useful their work may be on the narrower questions involved. It is the free privilege, it is the heavy burden, for all of us, to think—to deliberate and make judgments about human good, about what kind of world we want to live in, and to take action according to that thought.
  • I am not sure how to build a world in which children and adolescents, at least, do not feel they need to live their whole lives online.
  • whatever particular solutions emerge from our negotiations with each other and our reckonings with the force of cultural momentum, they will remain unavailable until we give ourselves permission to set the terms of our common life.
  • But the environments in which humans find themselves vary significantly, and in ways that have equally significant downstream effects on the particular expression of human nature in that context.
  • most of all, without affording Apple, Facebook, Google, and their ilk the defensive allegiance we should reserve for each other.
Javier E

George Packer: Is Amazon Bad for Books? : The New Yorker - 0 views

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • ...38 more annotations...
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent. (A worked example of this arithmetic appears at the end of these notes.)
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year.
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” John Sargent, the chief executive of Macmillan, told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue.
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of it dreck
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
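
A rough arithmetic sketch of the agent’s direct-sales claim above. Every figure here (cover price, retailer cut, unit sales) is an assumption invented for illustration; only the thirty-per-cent numbers come from the quote, read loosely as the publisher reclaiming the retailer’s cut.

```python
# Rough arithmetic behind the agent's claim (all figures assumed):
# a publisher selling direct keeps the retailer's former cut of the
# cover price, but loses about 30% of unit sales.

cover_price = 15.00          # assumed cover price per copy
retailer_cut = 0.30          # assumed share of the price kept by retailers
units_via_retail = 100_000   # assumed annual unit sales through retailers

# Status quo: the publisher receives the cover price minus the cut.
revenue_retail = units_via_retail * cover_price * (1 - retailer_cut)

# Direct sales: full cover price per copy, on 30% fewer copies.
units_direct = units_via_retail * (1 - 0.30)
revenue_direct = units_direct * cover_price

print(f"via retailers:  ${revenue_retail:,.0f}")   # $1,050,000
print(f"selling direct: ${revenue_direct:,.0f}")   # $1,050,000
```

Under these assumptions the two channels net out exactly, which may be why the agent treats the lost sales as survivable: the higher per-copy margin offsets the lost volume, and the publisher gains a direct relationship with readers.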
Javier E

The Flight From Conversation - NYTimes.com - 0 views

  • we have sacrificed conversation for mere connection.
  • the little devices most of us carry around are so powerful that they change not only what we do, but also who we are.
  • A businessman laments that he no longer has colleagues at work. He doesn’t stop by to talk; he doesn’t call. He says that he doesn’t want to interrupt them. He says they’re “too busy on their e-mail.”
  • ...19 more annotations...
  • We want to customize our lives. We want to move in and out of where we are because the thing we value most is control over where we focus our attention. We have gotten used to the idea of being in a tribe of one, loyal to our own party.
  • We are tempted to think that our little “sips” of online connection add up to a big gulp of real conversation. But they don’t.
  • “Someday, someday, but certainly not now, I’d like to learn how to have a conversation.”
  • We can’t get enough of one another if we can use technology to keep one another at distances we can control: not too close, not too far, just right. I think of it as a Goldilocks effect. Texting and e-mail and posting let us present the self we want to be. This means we can edit. And if we wish to, we can delete. Or retouch: the voice, the flesh, the face, the body. Not too much, not too little — just right.
  • Human relationships are rich; they’re messy and demanding. We have learned the habit of cleaning them up with technology.
  • I have often heard the sentiment “No one is listening to me.” I believe this feeling helps explain why it is so appealing to have a Facebook page or a Twitter feed — each provides so many automatic listeners. And it helps explain why — against all reason — so many of us are willing to talk to machines that seem to care about us. Researchers around the world are busy inventing sociable robots, designed to be companions to the elderly, to children, to all of us.
  • Connecting in sips may work for gathering discrete bits of information or for saying, “I am thinking about you.” Or even for saying, “I love you.” But connecting in sips doesn’t work as well when it comes to understanding and knowing one another. In conversation we tend to one another.
  • We can attend to tone and nuance. In conversation, we are called upon to see things from another’s point of view.
  • I’m the one who doesn’t want to be interrupted. I think I should. But I’d rather just do things on my BlackBerry.
  • And we use conversation with others to learn to converse with ourselves. So our flight from conversation can mean diminished chances to learn skills of self-reflection
  • we have little motivation to say something truly self-reflective. Self-reflection in conversation requires trust. It’s hard to do anything with 3,000 Facebook friends except connect.
  • we seem almost willing to dispense with people altogether. Serious people muse about the future of computer programs as psychiatrists. A high school sophomore confides to me that he wishes he could talk to an artificial intelligence program instead of his dad about dating; he says the A.I. would have so much more in its database. Indeed, many people tell me they hope that as Siri, the digital assistant on Apple’s iPhone, becomes more advanced, “she” will be more and more like a best friend — one who will listen when others won’t.
  • FACE-TO-FACE conversation unfolds slowly. It teaches patience. When we communicate on our digital devices, we learn different habits. As we ramp up the volume and velocity of online connections, we start to expect faster answers. To get these, we ask one another simpler questions; we dumb down our communications, even on the most important matters.
  • WE expect more from technology and less from one another and seem increasingly drawn to technologies that provide the illusion of companionship without the demands of relationship. Always-on/always-on-you devices provide three powerful fantasies: that we will always be heard; that we can put our attention wherever we want it to be; and that we never have to be alone. Indeed our new devices have turned being alone into a problem that can be solved.
  • When people are alone, even for a few moments, they fidget and reach for a device. Here connection works like a symptom, not a cure, and our constant, reflexive impulse to connect shapes a new way of being.
  • Think of it as “I share, therefore I am.” We use technology to define ourselves by sharing our thoughts and feelings as we’re having them. We used to think, “I have a feeling; I want to make a call.” Now our impulse is, “I want to have a feeling; I need to send a text.”
  • Lacking the capacity for solitude, we turn to other people but don’t experience them as they are. It is as though we use them, need them as spare parts to support our increasingly fragile selves.
  • If we are unable to be alone, we are far more likely to be lonely. If we don’t teach our children to be alone, they will know only how to be lonely.
  • I am a partisan for conversation. To make room for it, I see some first, deliberate steps. At home, we can create sacred spaces: the kitchen, the dining room. We can make our cars “device-free zones.”
Javier E

There's No Such Thing As 'Sound Science' | FiveThirtyEight - 1 views

  • Science is being turned against itself. For decades, its twin ideals of transparency and rigor have been weaponized by those who disagree with results produced by the scientific method. Under the Trump administration, that fight has ramped up again.
  • The same entreaties crop up again and again: We need to root out conflicts. We need more precise evidence. What makes these arguments so powerful is that they sound quite similar to the points raised by proponents of a very different call for change that’s coming from within science.
  • Despite having dissimilar goals, the two forces espouse principles that look surprisingly alike: Science needs to be transparent. Results and methods should be openly shared so that outside researchers can independently reproduce and validate them. The methods used to collect and analyze data should be rigorous and clear, and conclusions must be supported by evidence.
  • ...26 more annotations...
  • they’re also used as talking points by politicians who are working to make it more difficult for the EPA and other federal agencies to use science in their regulatory decision-making, under the guise of basing policy on “sound science.” Science’s virtues are being wielded against it.
  • What distinguishes the two calls for transparency is intent: Whereas the “open science” movement aims to make science more reliable, reproducible and robust, proponents of “sound science” have historically worked to amplify uncertainty, create doubt and undermine scientific discoveries that threaten their interests.
  • “Our criticisms are founded in a confidence in science,” said Steven Goodman, co-director of the Meta-Research Innovation Center at Stanford and a proponent of open science. “That’s a fundamental difference — we’re critiquing science to make it better. Others are critiquing it to devalue the approach itself.”
  • Calls to base public policy on “sound science” seem unassailable if you don’t know the term’s history. The phrase was adopted by the tobacco industry in the 1990s to counteract mounting evidence linking secondhand smoke to cancer.
  • The sound science tactic exploits a fundamental feature of the scientific process: Science does not produce absolute certainty. Contrary to how it’s sometimes represented to the public, science is not a magic wand that turns everything it touches to truth. Instead, it’s a process of uncertainty reduction, much like a game of 20 Questions.
  • Any given study can rarely answer more than one question at a time, and each study usually raises a bunch of new questions in the process of answering old ones. “Science is a process rather than an answer,” said psychologist Alison Ledgerwood of the University of California, Davis. Every answer is provisional and subject to change in the face of new evidence. It’s not entirely correct to say that “this study proves this fact,” Ledgerwood said. “We should be talking instead about how science increases or decreases our confidence in something.” (A toy numerical version of this confidence-updating appears after these annotations.)
  • While insisting that they merely wanted to ensure that public policy was based on sound science, tobacco companies defined the term in a way that ensured that no science could ever be sound enough. The only sound science was certain science, which is an impossible standard to achieve.
  • “Doubt is our product,” wrote one employee of the Brown & Williamson tobacco company in a 1969 internal memo. The note went on to say that doubt “is the best means of competing with the ‘body of fact’” and “establishing a controversy.” These strategies for undermining inconvenient science were so effective that they’ve served as a sort of playbook for industry interests ever since
  • Doubt merchants aren’t pushing for knowledge, they’re practicing what Proctor has dubbed “agnogenesis” — the intentional manufacture of ignorance. This ignorance isn’t simply the absence of knowing something; it’s a lack of comprehension deliberately created by agents who don’t want you to know,
  • In the hands of doubt-makers, transparency becomes a rhetorical move. “It’s really difficult as a scientist or policy maker to make a stand against transparency and openness, because well, who would be against it?
  • But at the same time, “you can couch everything in the language of transparency and it becomes a powerful weapon.” For instance, when the EPA was preparing to set new limits on particulate pollution in the 1990s, industry groups pushed back against the research and demanded access to primary data (including records that researchers had promised participants would remain confidential) and a reanalysis of the evidence. Their calls succeeded and a new analysis was performed. The reanalysis essentially confirmed the original conclusions, but the process of conducting it delayed the implementation of regulations and cost researchers time and money.
  • Delay is a time-tested strategy. “Gridlock is the greatest friend a global warming skeptic has,” said Marc Morano, a prominent critic of global warming research
  • which has received funding from the oil and gas industry. “We’re the negative force. We’re just trying to stop stuff.”
  • these ploys are getting a fresh boost from Congress. The Data Quality Act (also known as the Information Quality Act) was reportedly written by an industry lobbyist and quietly passed as part of an appropriations bill in 2000. The rule mandates that federal agencies ensure the “quality, objectivity, utility, and integrity of information” that they disseminate, though it does little to define what these terms mean. The law also provides a mechanism for citizens and groups to challenge information that they deem inaccurate, including science that they disagree with. “It was passed in this very quiet way with no explicit debate about it — that should tell you a lot about the real goals,” Levy said.
  • in the 20 months following its implementation, the act was repeatedly used by industry groups to push back against proposed regulations and bog down the decision-making process. Instead of deploying transparency as a fundamental principle that applies to all science, these interests have used transparency as a weapon to attack very particular findings that they would like to eradicate.
  • Now Congress is considering another way to legislate how science is used. The Honest Act, a bill sponsored by Rep. Lamar Smith of Texas (the bill has been passed by the House but still awaits a vote in the Senate), is another example of what Levy calls a “Trojan horse” law that uses the language of transparency as a cover to achieve other political goals. Smith’s legislation would severely limit the kind of evidence the EPA could use for decision-making. Only studies whose raw data and computer codes were publicly available would be allowed for consideration.
  • It might seem like an easy task to sort good science from bad, but in reality it’s not so simple. “There’s a misplaced idea that we can definitively distinguish the good from the not-good science, but it’s all a matter of degree,” said Brian Nosek, executive director of the Center for Open Science. “There is no perfect study.” Requiring regulators to wait until they have (nonexistent) perfect evidence is essentially “a way of saying, ‘We don’t want to use evidence for our decision-making,’
  • Most scientific controversies aren’t about science at all, and once the sides are drawn, more data is unlikely to bring opponents into agreement.
  • objective knowledge is not enough to resolve environmental controversies. “While these controversies may appear on the surface to rest on disputed questions of fact, beneath often reside differing positions of value; values that can give shape to differing understandings of what ‘the facts’ are.” What’s needed in these cases isn’t more or better science, but mechanisms to bring those hidden values to the forefront of the discussion so that they can be debated transparently. “As long as we continue down this unabashedly naive road about what science is, and what it is capable of doing, we will continue to fail to reach any sort of meaningful consensus on these matters,”
  • The dispute over tobacco was never about the science of cigarettes’ link to cancer. It was about whether companies have the right to sell dangerous products and, if so, what obligations they have to the consumers who purchased them.
  • Similarly, the debate over climate change isn’t about whether our planet is heating, but about how much responsibility each country and person bears for stopping it
  • While researching her book “Merchants of Doubt,” science historian Naomi Oreskes found that some of the same people who were defending the tobacco industry as scientific experts were also receiving industry money to deny the role of human activity in global warming. What these issues had in common, she realized, was that they all involved the need for government action. “None of this is about the science. All of this is a political debate about the role of government,”
  • These controversies are really about values, not scientific facts, and acknowledging that would allow us to have more truthful and productive debates. What would that look like in practice? Instead of cherry-picking evidence to support a particular view (and insisting that the science points to a desired action), the various sides could lay out the values they are using to assess the evidence.
  • For instance, in Europe, many decisions are guided by the precautionary principle — a system that values caution in the face of uncertainty and says that when the risks are unclear, it should be up to industries to show that their products and processes are not harmful, rather than requiring the government to prove that they are harmful before they can be regulated. By contrast, U.S. agencies tend to wait for strong evidence of harm before issuing regulations
  • the difference between them comes down to priorities: Is it better to exercise caution at the risk of burdening companies and perhaps the economy, or is it more important to avoid potential economic downsides even if it means that sometimes a harmful product or industrial process goes unregulated?
  • But science can’t tell us how risky is too risky to allow products like cigarettes or potentially harmful pesticides to be sold — those are value judgements that only humans can make.
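
The uncertainty-reduction framing above can be made concrete with a little Bayesian bookkeeping. A minimal sketch, with entirely made-up priors and likelihoods, of how successive studies raise or lower confidence without ever producing certainty:

```python
# A minimal sketch of science as uncertainty reduction (illustrative
# numbers only): each study updates our confidence in a hypothesis
# via Bayes' rule, rather than settling it outright.

def update(prior: float, p_evidence_if_true: float,
           p_evidence_if_false: float) -> float:
    """Posterior probability that the hypothesis is true,
    given a result with the stated likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

confidence = 0.50  # start agnostic (assumed prior)

# Three hypothetical studies: (chance of this result if the hypothesis
# is true, chance of the same result if it is false).
studies = [(0.80, 0.30), (0.70, 0.40), (0.20, 0.60)]  # third cuts against

for i, (p_true, p_false) in enumerate(studies, 1):
    confidence = update(confidence, p_true, p_false)
    print(f"after study {i}: confidence = {confidence:.2f}")
# Prints roughly 0.73, 0.82, 0.61: confidence rises with supportive
# results and falls with the contrary one -- it never reaches 0 or 1.
```

The third, contrary study pulls confidence back down, but nothing here ever reaches 0 or 1 — which is exactly the feature the “sound science” tactic exploits by demanding certainty.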
anonymous

Can you trust your earliest childhood memories? - BBC Future - 1 views

  • The moments we remember from the first years of our lives are often our most treasured because we have carried them longest. The chances are, they are also completely made up.
  • Around four out of every 10 of us have fabricated our first memory, according to researchers. This is thought to be because our brains do not develop the ability to store autobiographical memories until we are at least two years old.
  • Yet a surprising number of us have some flicker of memory from before that age
  • ...23 more annotations...
  • Experts have managed to turn people off all sorts of foods by convincing them that those foods had made them ill when they were children
  • “People have a life story, particularly as they get older and for some people it needs to stretch back to the very early stage of life,”
  • The prevailing account of how we come to believe and remember things is based around the concept of source monitoring. “Every time a thought comes to mind we have to make a decision – have we experienced it [an event], imagined it or have we talked about it with other people,” says Kimberley Wade
  • Most of the time we make that decision correctly and can identify where these mental experiences come from, but sometimes we get it wrong.
  • Wade admits she has spent a lot of time recalling an event that was actually something her brother experienced rather than herself, but despite this, it is rich in detail and provokes emotion
  • Memory researchers have shown it is possible to induce fictional autobiographical memories in volunteers, including accounts of getting lost in a shopping mall and even having tea with a member of the Royal Family
  • Based on my research, everybody is capable of forming complex false memories, given the right circumstances – Julia Shaw
  • In some situations, such as after looking at pictures or a video, children are more susceptible to forming false memories than adults. People with certain personality types are also thought to be more prone.
  • But carrying around false memories from your childhood could be having a far greater impact on you than you may realise too. The events, emotions and experiences we remember from our early years can help to shape who we are as adults, determining our likes, dislikes, fears and even our behaviour.
  • Memories before the age of three are more than likely to be false. Any that appear very fluid and detailed, as if you were playing back a home video and experiencing a chronological account of a memory, could well also be made up. It is more likely that fuzzy fragments, or snapshots of moments are real, as long as they are not from too early in your life.
  • We crave a cohesive narrative of our own existence and will even invent stories to give us a more complete picture
  • Interestingly, scientists have also found positive suggestions, such as “you loved asparagus the first time you ate it” tend to be more effective than negative suggestions like “you got sick drinking vodka”
  • “Miscarriage of justice, incarceration, loss of reputation, job and status, and family breakdown occur.”
  • One of the major problems with legal cases involving false memories, is that it is currently impossible to distinguish between true and fictional recollections
  • Efforts have been made to analyse minor false memories in a brain scanner (fMRI) and detect different neurological patterns, but there is nothing as yet to indicate that this technology can be used to detect whether recollections have become distorted.
  • the most extreme case of memory implantation involves a controversial technique called “regression therapy”, where patients confront childhood traumas, supposedly buried in their subconscious
  • “Memories are malleable and tend to change slightly each time we revisit them, in the same way that spoken stories do,”
  • “Therefore at each recollection, new elements can easily be integrated while existing elements can be altered or lost.”
  • This is not to say that all evidence that relies on memory should be discarded or regarded as unreliable – they often provide the most compelling testimony in criminal cases. But it has led to rules and guidelines about how witnesses and victims should be questioned to ensure their recollections of an event or perpetrator are not contaminated by investigators or prosecutors.
  • While this may seem like a bit of fun, many scientists believe the “false memory diet” could be used to tackle obesity and encourage people to reach for healthier options like asparagus, or even help cut people’s alcohol consumption.
  • And we may not want to rid ourselves of these memories. Our memories, whether fictional or not, can help to bring us closer together.
  • This is a great and very detailed article about memory, how we change our own memories, and how we are affected by those changes.
Javier E

Opinion | Reflections on Stephen L. Carter's 1991 Book, 'Reflections of an Affirmative ... - 0 views

  • In 1991, Stephen L. Carter, a professor at Yale Law School, began his book “Reflections of an Affirmative Action Baby” with a discomfiting anecdote. A fellow professor had criticized one of Carter’s papers because it “showed a lack of sensitivity to the experience of Black people in America.”
  • “I live in a box,” he wrote, one bearing all kinds of labels, including “Careful: Discuss Civil Rights Law or Law and Race Only” and “Warning! Affirmative Action Baby! Do Not Assume That This Individual Is Qualified!”
  • The diversity argument holds that people of different races benefit from one another’s presence, which sounds desirable on its face
  • ...17 more annotations...
  • The fact that Thomas was very likely nominated because he was Black and because he opposed affirmative action posed a conundrum for many supporters of racial preferences. Was being Black enough? Or did you have to be “the right kind” of Black person? It’s a question Carter openly wrestles with in his book.
  • What immediately struck me on rereading it was how prescient Carter was about these debates 32 years ago. What role affirmative action should take was playing out then in ways that continue to reverberate.
  • The demise of affirmative action, in Carter’s view, was both necessary and inevitable. “We must reject the common claim that an end to preferences ‘would be a disastrous situation, amounting to a virtual nullification of the 1954 desegregation ruling,’” he wrote, quoting the activist and academic Robert Allen. “The prospect of its end should be a challenge and a chance.”
  • Like many people today — both proponents and opponents of affirmative action — he expressed reservations about relying on diversity as the constitutional basis for racial preferences.
  • Carter bristled at the judgment of many of his Black peers, describing several situations in which he found himself accused of being “inauthentically” Black, as if people of a particular race were a monolith and that those who deviated from it were somehow shirking their duty. He said he didn’t want to be limited in what he was allowed to say by “an old and vicious form of silencing.”
  • But the implication of recruiting for diversity, Carter explained, had less to do with admitting Black students to redress past discrimination and more to do with supporting and reinforcing essentialist notions about Black people.
  • An early critic of groupthink, Carter warned against “the idea that Black people who gain positions of authority or influence are vested a special responsibility to articulate the presumed views of other people who are Black — in effect, to think and act and speak in a particular way, the Black way — and that there is something peculiar about Black people who insist on doing anything else.”
  • A graduate of Stanford and Yale Law, Carter was a proud beneficiary of affirmative action. Yet he acknowledged the personal toll it took (“a decidedly mixed blessing”) as well as affirmative action’s sometimes troubling effects on Black people as the programs evolved.
  • It’s hard to imagine Carter welcoming the current vogue for white allyship, with its reductive assumption that all Black people have the same interests and values
  • He disparaged what he called “the peculiar relationship between Black intellectuals and the white ones who seem loath to criticize us for fear of being branded racists — which is itself a mark of racism of a sort.”
  • In the past, such ideas might have been seen as “frankly racist,” Carter noted. “Now, however, they are almost a gospel for people who want to show their commitment to equality.”
  • Carter took issue with the belief, now practically gospel in academic, cultural and media circles, that heightened race consciousness would be central to overcoming racism
  • However well intentioned you may be, when you reduce people to their race-based identity rather than view them as individuals in their full, complex humanity, you risk making sweeping assumptions about who they are. This used to be called stereotyping or racism.
  • he rejected all efforts to label him, insisting that intellectuals should be “politically unpredictable.”
  • “Critics who attempt to push (or pull) Carter into the ranks of the Black right wing will be making a mistake. He is not a conservative, neo- or otherwise. He is an honest Black scholar — the product of the pre-politically correct era — who abhors the stifling of debate by either wing or by people of any hue.”
  • This strikes me as the greatest difference between reading the book today and reading it as an undergrad at a liberal Ivy League college: the attitude toward debating controversial views. “Reflections” offers a vigorous and unflinching examination of ideas, something academia, media and the arts still prized in 1991.
  • Today, a kind of magical thinking has seized ideologues on both the left and the right, who seem to believe that stifling debate on difficult questions will make them go away
Javier E

Love People, Not Pleasure - NYTimes.com - 0 views

  • Fame, riches and pleasure beyond imagination. Sound great? He went on to write: “I have diligently numbered the days of pure and genuine happiness which have fallen to my lot: They amount to 14.” Abd al-Rahman’s problem wasn’t happiness, as he believed — it was unhappiness
  • Happiness and unhappiness are certainly related, but they are not actually opposites.
  • Circumstances are certainly important. No doubt Abd al-Rahman could point to a few in his life. But paradoxically, a better explanation for his unhappiness may have been his own search for well-being. And the same might go for you.
  • ...22 more annotations...
  • As strange as it seems, being happier than average does not mean that one can’t also be unhappier than average.
  • In 2009, researchers from the University of Rochester conducted a study tracking the success of 147 recent graduates in reaching their stated goals after graduation. Some had “intrinsic” goals, such as deep, enduring relationships. Others had “extrinsic” goals, such as achieving reputation or fame. The scholars found that intrinsic goals were associated with happier lives. But the people who pursued extrinsic goals experienced more negative emotions, such as shame and fear. They even suffered more physical maladies.
  • the paradox of fame. Just like drugs and alcohol, once you become addicted, you can’t live without it. But you can’t live with it, either.
  • That impulse to fame by everyday people has generated some astonishing innovations.
  • Today, each of us can build a personal little fan base, thanks to Facebook, YouTube, Twitter and the like. We can broadcast the details of our lives to friends and strangers in an astonishingly efficient way. That’s good for staying in touch with friends, but it also puts a minor form of fame-seeking within each person’s reach. And several studies show that it can make us unhappy.
  • It makes sense. What do you post to Facebook? Pictures of yourself yelling at your kids, or having a hard time at work? No, you post smiling photos of a hiking trip with friends. You build a fake life — or at least an incomplete one — and share it. Furthermore, you consume almost exclusively the fake lives of your social media “friends.” Unless you are extraordinarily self-aware, how could it not make you feel worse to spend part of your time pretending to be happier than you are, and the other part of your time seeing how much happier others seem to be than you?
  • the bulk of the studies point toward the same important conclusion: People who rate materialistic goals like wealth as top personal priorities are significantly likelier to be more anxious, more depressed and more frequent drug users, and even to have more physical ailments than those who set their sights on more intrinsic values.
  • as the Dalai Lama pithily suggests, it is better to want what you have than to have what you want.
  • In 2004, two economists looked into whether more sexual variety led to greater well-being. They looked at data from about 16,000 adult Americans who were asked confidentially how many sex partners they had had in the preceding year, and about their happiness. Across men and women alike, the data show that the optimal number of partners is one.
  • This might seem totally counterintuitive. After all, we are unambiguously driven to accumulate material goods, to seek fame, to look for pleasure. How can it be that these very things can give us unhappiness instead of happiness? There are two explanations, one biological and the other philosophical.
  • From an evolutionary perspective, it makes sense that we are wired to seek fame, wealth and sexual variety. These things make us more likely to pass on our DNA.
  • here’s where the evolutionary cables have crossed: We assume that things we are attracted to will relieve our suffering and raise our happiness.
  • that is Mother Nature’s cruel hoax. She doesn’t really care either way whether you are unhappy — she just wants you to want to pass on your genetic material. If you conflate intergenerational survival with well-being, that’s your problem, not nature’s.
  • More philosophically, the problem stems from dissatisfaction — the sense that nothing has full flavor, and we want more. We can’t quite pin down what it is that we seek. Without a great deal of reflection and spiritual hard work, the likely candidates seem to be material things, physical pleasures or favor among friends and strangers.
  • We look for these things to fill an inner emptiness. They may bring a brief satisfaction, but it never lasts, and it is never enough. And so we crave more.
  • This search for fame, the lust for material things and the objectification of others — that is, the cycle of grasping and craving — follows a formula that is elegant, simple and deadly: Love things, use people.
  • This was Abd al-Rahman’s formula as he sleepwalked through life. It is the worldly snake oil peddled by the culture makers from Hollywood to Madison Avenue.
  • Simply invert the deadly formula and render it virtuous: Love people, use things.
  • It requires the courage to repudiate pride and the strength to love others — family, friends, colleagues, acquaintances, God and even strangers and enemies. Only deny love to things that actually are objects. The practice that achieves this is charity. Few things are as liberating as giving away to others that which we hold dear.
  • This also requires a condemnation of materialism.
  • Finally, it requires a deep skepticism of our own basic desires. Of course you are driven to seek admiration, splendor and physical license.
  • Declaring war on these destructive impulses is not about asceticism or Puritanism. It is about being a prudent person who seeks to avoid unnecessary suffering.
Javier E

Opinion | How Genetics Is Changing Our Understanding of 'Race' - The New York Times - 0 views

  • In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis.
  • Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.” (A toy version of this variance decomposition appears after these annotations.)
  • In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.
  • ...29 more annotations...
  • It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view.
  • this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.
  • With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.
  • I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”
  • Groundbreaking advances in DNA sequencing technology have been made over the last two decades
  • The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations
  • You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work
  • I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science.
  • I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.
  • This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such difference
  • While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another
  • Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly.
  • Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases.
  • in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.
  • consider what kinds of voices are filling the void that our silence is creating
  • Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.
  • As 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.
  • Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.
  • What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes
  • They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.
  • This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent.
  • If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong.
  • For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.
  • For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females
  • The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material
  • How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences
  • fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward.
  • Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.
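
Lewontin’s 85/15 split is a variance decomposition: total variation in a trait splits into a within-group and a between-group component. A minimal sketch of that calculation on fabricated numbers (the data and populations below are invented; only the idea of the split comes from the piece):

```python
# Sketch of Lewontin-style variance partitioning (illustrative data).
# For one trait measured across several equal-sized groups, total
# variance splits into a within-group part and a between-group part.

from statistics import mean, pvariance

groups = {  # fabricated measurements for three hypothetical populations
    "pop_a": [10.1, 9.8, 10.4, 10.0, 9.7],
    "pop_b": [10.3, 10.0, 9.9, 10.6, 10.2],
    "pop_c": [9.9, 10.2, 10.1, 9.8, 10.3],
}

everyone = [x for vals in groups.values() for x in vals]
total_var = pvariance(everyone)

# Within-group component: the average of each group's own variance
# (groups are the same size here, so a plain mean works).
within_var = mean(pvariance(vals) for vals in groups.values())

# Between-group component: whatever variance remains, i.e. the
# spread of the group means around the grand mean.
between_var = total_var - within_var

print(f"within groups:  {within_var / total_var:.0%}")
print(f"between groups: {between_var / total_var:.0%}")
```

With numbers like these, roughly nine-tenths of the variance sits within groups rather than between them — the shape of Lewontin’s 85/15 finding.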
anonymous

It's OK to Feel Joy Right Now - The New York Times - 0 views

  • It’s OK to Feel Joy Right Now. Here’s how to prolong it.
  • The birds are chirping, a warm breeze is blowing and some of your friends are getting vaccinated.
  • After a year of anxiety and stress, many of us are rediscovering what optimism feels like.
  • ...33 more annotations...
  • Spring is the season of optimism. With it comes more natural light and warm weather, both great mood boosters
  • Yes, receiving your vaccine shot, daydreaming about intimate dinner parties or those first hugs with grandchildren may give you a jolt of joy, but euphoria, unfortunately, tends to be fleeting.
  • When good (or bad) things happen, we feel an initial surge or dip in our overall happiness levels.
  • Hedonic adaptation means that, over time, we settle back into wherever we were happiness-wise before that good or bad event happened. (A toy simulation of this drift appears after these annotations.)
  • even if the good thing — like getting your dream job — is continuing.
  • To maintain those positive feelings, you are going to need to work on it a bit
  • Thank evolution.
  • “Our brains developed biologically for survival, not happiness,”
  • Even the mundane things — like watching yet another youth soccer game — can feel special if you take a moment to remember the not-so-distant past when so much of our lives was put on hold.
  • While many Americans are beginning to exhale, many others are buried deep in grief.
  • If you’re not allowing yourself to feel happy because you worry you’ll be disappointed by future bad news, that’s OK too, Dr. Owens said.
  • This is called defensive pessimism, and it can help people feel more in control of a bad situation.
  • it’s understandable if you are just not ready to feel optimistic yet
  • Savor this (and everything).
  • Your first time hugging friends in a year is going to be so sweet, you’ll undoubtedly savor every moment of it. But there is joy in everyday things, too
  • To start, it’s OK if you’re not OK.
  • Marvel as much as you can.
  • This feeling can come from a walk around the block, said Allen Klein, author of “The Awe Factor.” One of his favorite strategies for ensuring his daily dose of awe is heading out for an “awe walk.”
  • On these strolls, he’ll turn off his mental list of chores and things to remember, and instead focus on finding wonder in small things along the way.
  • Be grateful and kind.
  • Acts of kindness tend to increase people’s ratings of their happiness,
  • The boost you get may not be huge, however
  • A study from the University of California, Riverside, found that reflecting on past kind deeds improved well-being at a rate similar to actually going out and doing new good deeds.
  • This isn’t clearance to never be kind again, though. But if you’re stuck at home and cannot get out to help a friend, try thinking back on a time when you did those things.
  • Realize happiness alone isn’t enough.
  • If you have been struggling with depression throughout the pandemic — as many Americans have — working to boost your own happiness may not be the cure you are hoping for
  • “The opposite of depression is not happiness,”
  • “The opposite of depression is no longer being depressed.”
  • If you have been struggling with symptoms of depression these past 12 months, you may feel your depression subside as the pandemic slowly wanes. It may not.
  • Clinical depression should be treated by a mental health professional.
  • Break out your calendar.
  • Perhaps it’s too early to set a date for that 15-person dinner party, but you certainly can crack open a cookbook to start planning the menu.
  • And when party day arrives, don’t forget to savor every last morsel and belly laugh, as you eat, drink and be more than just fleetingly merry.
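
Hedonic adaptation, as described above, is in effect a drift back toward baseline. A toy simulation under assumed parameters (the baseline, the size of the boost, and the weekly adaptation rate are all invented for illustration):

```python
# Toy model of hedonic adaptation (illustrative parameters only):
# a good event bumps happiness up, then each week happiness drifts
# a fixed fraction of the way back toward the personal baseline.

baseline = 5.0         # assumed resting happiness on a 0-10 scale
adaptation_rate = 0.3  # assumed fraction of the gap closed per week

happiness = baseline + 3.0  # a good event, e.g. landing a dream job

for week in range(1, 9):
    happiness = baseline + (happiness - baseline) * (1 - adaptation_rate)
    print(f"week {week}: happiness = {happiness:.2f}")
# By week 8 the boost has mostly evaporated: happiness is back near
# 5.0 even though the good thing is still present.
```

Whatever the exact numbers, the shape is the point: the jolt decays even while the good thing persists, which is why the article’s savoring and gratitude strategies target the decay rather than the event.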
Javier E

The Disease Detective - The New York Times - 1 views

  • What’s startling is how many mystery infections still exist today.
  • More than a third of acute respiratory illnesses are idiopathic; the same is true for up to 40 percent of gastrointestinal disorders and more than half the cases of encephalitis (swelling of the brain).
  • Up to 20 percent of cancers and a substantial portion of autoimmune diseases, including multiple sclerosis and rheumatoid arthritis, are thought to have viral triggers, but a vast majority of those have yet to be identified.
  • ...34 more annotations...
  • Globally, the numbers can be even worse, and the stakes often higher. “Say a person comes into the hospital in Sierra Leone with a fever and flulike symptoms,” DeRisi says. “After a few days, or a week, they die. What caused that illness? Most of the time, we never find out. Because if the cause isn’t something that we can culture and test for” — like hepatitis, or strep throat — “it basically just stays a mystery.”
  • It would be better, DeRisi says, to watch for rare cases of mystery illnesses in people, which often exist well before a pathogen gains traction and is able to spread.
  • Based on a retrospective analysis of blood samples, scientists now know that H.I.V. emerged nearly a dozen times over a century, starting in the 1920s, before it went global.
  • Zika was a relatively harmless illness before a single mutation, in 2013, gave the virus the ability to enter and damage brain cells.
  • “The beauty of this approach” — running blood samples from people hospitalized all over the world through his system, known as IDseq — “is that it works even for things that we’ve never seen before, or things that we might think we’ve seen but which are actually something new.”
  • In this scenario, an undiscovered or completely new virus won’t trigger a match but will instead be flagged. (Even in those cases, the mystery pathogen will usually belong to a known virus family: coronaviruses, for instance, or filoviruses that cause hemorrhagic fevers like Ebola and Marburg.)
  • And because different types of bacteria require specific conditions in order to grow, you also need some idea of what you’re looking for in order to find it.
  • The same is true of genomic sequencing, which relies on “primers” designed to match different combinations of nucleotides (the building blocks of DNA and RNA).
  • Even looking at a slide under a microscope requires staining, which makes organisms easier to see — but the stains used to identify bacteria and parasites, for instance, aren’t the same.
  • The practice that DeRisi helped pioneer to skirt this problem is known as metagenomic sequencing
  • Unlike ordinary genomic sequencing, which tries to spell out the purified DNA of a single, known organism, metagenomic sequencing can be applied to a messy sample of just about anything — blood, mud, seawater, snot — which will often contain dozens or hundreds of different organisms, all unknown, and each with its own DNA. In order to read all the fragmented genetic material, metagenomic sequencing uses sophisticated software to stitch the pieces together by matching overlapping segments. (A toy version of this overlap-stitching appears after these annotations.)
  • The assembled genomes are then compared against a vast database of all known genomic sequences — maintained by the government-run National Center for Biotechnology Information — making it possible for researchers to identify everything in the mix
  • Traditionally, the way that scientists have identified organisms in a sample is to culture them: Isolate a particular bacterium (or virus or parasite or fungus); grow it in a petri dish; and then examine the result under a microscope, or use genomic sequencing, to understand just what it is. But because less than 2 percent of bacteria — and even fewer viruses — can be grown in a lab, the process often reveals only a tiny fraction of what’s actually there. It’s a bit like planting 100 different kinds of seeds that you found in an old jar. One or two of those will germinate and produce a plant, but there’s no way to know what the rest might have grown into.
  • Such studies have revealed just how vast the microbial world is, and how little we know about it
  • “The selling point for researchers is: ‘Look, this technology lets you investigate what’s happening in your clinic, whether it’s kids with meningitis or something else,’” DeRisi said. “We’re not telling you what to do with it. But it’s also true that if we have enough people using this, spread out all around the world, then it does become a global network for detecting emerging pandemics
  • One study found more than 1,000 different kinds of viruses in a tiny amount of human stool; another found a million in a couple of pounds of marine sediment. And most were organisms that nobody had seen before.
  • After the Biohub opened in 2016, one of DeRisi’s goals was to turn metagenomics from a rarefied technology used by a handful of elite universities into something that researchers around the world could benefit from
  • metagenomics requires enormous amounts of computing power, putting it out of reach of all but the most well-funded research labs. The tool DeRisi created, IDseq, made it possible for researchers anywhere in the world to process samples through the use of a small, off-the-shelf sequencer, much like the one DeRisi had shown me in his lab, and then upload the results to the cloud for analysis.
  • he’s the first to make the process so accessible, even in countries where lab supplies and training are scarce. DeRisi and his team tested the chemicals used to prepare DNA for sequencing and determined that using as little as half the recommended amount often worked fine. They also 3-D print some of the labs’ tools and replacement parts, and offer ongoing training and tech support
  • The metagenomic analysis itself — normally the most expensive part of the process — is provided free.
  • But DeRisi’s main innovation has been in streamlining and simplifying the extraordinarily complex computational side of metagenomics
  • IDseq is also fast, capable of doing analyses in hours that would take other systems weeks.
  • “What IDseq really did was to marry wet-lab work — accumulating samples, processing them, running them through a sequencer — with the bioinformatic analysis,”
  • “Without that, what happens in a lot of places is that the researcher will be like, ‘OK, I collected the samples!’ But because they can’t analyze them, the samples end up in the freezer. The information just gets stuck there.”
  • Meningitis itself isn’t a disease, just a description meaning that the tissues around the brain and spinal cord have become inflamed. In the United States, bacterial infections can cause meningitis, as can enteroviruses, mumps and herpes simplex. But a high proportion of cases have, as doctors say, no known etiology: No one knows why the patient’s brain and spinal tissues are swelling.
  • When Saha and her team ran the mystery meningitis samples through IDseq, though, the result was surprising. Rather than revealing a bacterial cause, as expected, a third of the samples showed signs of the chikungunya virus — specifically, a neuroinvasive strain that was thought to be extremely rare. “At first we thought, It cannot be true!” Saha recalls. “But the moment Joe and I realized it was chikungunya, I went back and looked at the other 200 samples that we had collected around the same time. And we found the virus in some of those samples as well.”
  • Until recently, chikungunya was a comparatively rare disease, present mostly in parts of Central and East Africa. “Then it just exploded through the Caribbean and Africa and across Southeast Asia into India and Bangladesh,” DeRisi told me. In 2011, there were zero cases of chikungunya reported in Latin America. By 2014, there were a million.
  • Chikungunya is a mosquito-borne virus, but when DeRisi and Saha looked at the results from IDseq, they also saw something else: a primate tetraparvovirus. Primate tetraparvoviruses are almost unknown in humans, and have been found only in certain regions. Even now, DeRisi is careful to note, it’s not clear what effect the virus has on people. “Maybe it’s dangerous, maybe it isn’t,” DeRisi says. “But I’ll tell you what: It’s now on my radar.
  • it reveals a landscape of potentially dangerous viruses that we would otherwise never find out about. “What we’ve been missing is that there’s an entire universe of pathogens out there that are causing disease in humans,” Imam notes, “ones that we often don’t even know exist.”
  • “The plan was, Let’s let researchers around the world propose studies, and we’ll choose 10 of them to start,” DeRisi recalls. “We thought we’d get, like, a couple dozen proposals, and instead we got 350.”
  • Metagenomic sequencing is especially good at what scientists call “environmental sampling”: identifying, say, every type of bacteria present in the gut microbiome, or in a teaspoon of seawater.
  • “When you draw blood from someone who has a fever in Ghana, you really don’t know very much about what would normally be in their blood without fever — let alone about other kinds of contaminants in the environment. So how do you interpret the relevance of all the things you’re seeing?”
  • Such criticisms have led some to say that metagenomics simply isn’t suited to the infrastructure of developing countries. Along with the problem of contamination, many labs struggle to get the chemical reagents needed for sequencing, either because of the cost or because of shipping and customs holdups
  • we’re less likely to be caught off-guard. “With Ebola, there’s always an issue: Where’s the virus hiding before it breaks out?” DeRisi explains. “But also, once we start sampling people who are hospitalized more widely — meaning not just people in Northern California or Boston, but in Uganda, and Sierra Leone, and Indonesia — the chance of disastrous surprises will go down. We’ll start seeing what’s hidden.”
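
The overlap-stitching and database-matching steps described in the annotations above can be sketched with a toy pipeline. Everything below — the reads, the reference “database,” the virus names, the 8-base overlap threshold — is fabricated for illustration; real tools like IDseq are enormously more sophisticated than this greedy merge.

```python
# Toy metagenomic-style pipeline (illustrative only): greedily merge
# reads by longest overlap, then check the result against a tiny
# "reference database," flagging anything with no good match.

def overlap(a: str, b: str, min_len: int = 8) -> int:
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def assemble(reads: list[str]) -> str:
    """Greedy assembly: repeatedly merge the pair with the longest overlap."""
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, left index, right index)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and (k := overlap(a, b)) > best[0]:
                    best = (k, i, j)
        k, i, j = best
        if k == 0:
            break  # no overlaps left; stop with what we have
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return max(reads, key=len)

# Fabricated reads from one underlying sequence, out of order.
reads = ["GGTACCTTAGCA", "CCTTAGCAGGAT", "AGCAGGATTTCG"]
contig = assemble(reads)
print("assembled contig:", contig)  # GGTACCTTAGCAGGATTTCG

# Crude stand-in for the reference-database comparison: the known
# sequences are fabricated; anything matching none of them is flagged.
references = {"virus_x": "GGTACCTTAGCAGGATTTCGA", "virus_y": "AAAATTTTCCCCGGGG"}

def classify(seq: str) -> str:
    for name, ref in references.items():
        if seq in ref or ref in seq:
            return name
    return "NO MATCH -- flag for follow-up"

print("classification:", classify(contig))           # matches virus_x
print("classification:", classify("CCGGAATTACGTAA")) # novel -> flagged
```

The second classification call shows the behavior DeRisi describes: a sequence that matches nothing in the database isn’t discarded, it’s flagged — which is how a genuinely new pathogen announces itself.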