TOK Friends / Group items tagged "shallow"

Javier E

When Your Facebook Friend Is Racist - Megan Garber - The Atlantic - 0 views

  • Psychologists Shannon Rauch and Kimberley Schanz published their work in the journal Computers in Human Behavior. They sampled 623 Internet users (all white, 70 percent students), asking them to indicate the frequency of their Facebook usage. The group then read one of three versions of a Facebook Notes page they were told was written by a 26-year-old named Jack Brown. "Jack" was white and male. The first version of Jack's message contained what the researchers call a "superiority message": It "contrasted the behaviors of black and white individuals, only to find consistent superiority of the whites."
  • The researchers then asked participants, for each version of the post, to rate factors like "how much they agreed with the message," "how accurate they found it," "how much they liked the writer," and, significantly, how likely they were to share the post with others
  • Their findings? "Frequent users are particularly disposed to be influenced by negative racial messages." The group of more-frequent Facebook users didn't differ from others in their reaction to the egalitarian message. But those users "were more positive toward the messages with racist content -- particularly the superiority message." 
  • Facebook, for all the unprecedented connection it fosters among previously atomized people, fosters a very particular kind of connection: one that is mediated, at all times, by Facebook. And one that therefore makes very particular kinds of assumptions about how and why people connect in the first place. Facebook "connection" is defined -- semantically, at least -- by friendship. ("Facebook friends," "friending people," etc.) While it doesn't assume that every connection is an actual friend, in the narrow and maybe even old-fashioned sense of the word, Facebook's infrastructure does assume esteem among people who friend each other.
  • The study itself, in fact, is confirming the hypothesis that Rauch and Schanz started with: "We predict," they noted, "that due to potential chronic traits and/or their adaptation to a Facebook culture of shallow processing and agreement, frequent Facebook users are highly susceptible to persuasive messages compared to less frequent users."
  • This is, to say the least, troubling.
  • Facebook, as a result, is structured as an aggressively upbeat place.
  • social complicity. You can argue on Facebook, but it is not really encouraged. And the interactions Facebook fosters as it expands -- the status updates, the information sharing, the news consumption -- stem from that default-positive place. "Like," but not "Dislike." "Recommend," but not "Reject."
  • That's significant, because Facebook wants to expand from social connection into informational connection. The News Feed as the "personalized newspaper"; the just-introduced Home as a mobile locus of that newspaper.
  • Heavy users of Facebook tend to use the site because of a desire for social inclusion. In that context, the study suggests, those users are primed to agree with fellow users rather than to criticize the information those users share. And not just in terms of their public interactions, but in terms of their private beliefs. This potent combination -- "a need to connect and an ethos of shallow processing" -- provides a warm, moist breeding ground for the spread of opinions, publicly and not-so-publicly. Racist ones among them.
  • What will happen if information gets fully social -- according to Facebook's definition of "fully social"? What will take place when the Jack Browns of the world aren't just our friends, but our news sources?
Javier E

If It Feels Right - NYTimes.com - 3 views

  • What’s disheartening is how bad they are at thinking and talking about moral issues.
  • you see the young people groping to say anything sensible on these matters. But they just don’t have the categories or vocabulary to do so.
  • “Not many of them have previously given much or any thought to many of the kinds of questions about morality that we asked,” Smith and his co-authors write. When asked about wrong or evil, they could generally agree that rape and murder are wrong. But, aside from these extreme cases, moral thinking didn’t enter the picture, even when considering things like drunken driving, cheating in school or cheating on a partner.
  • The default position, which most of them came back to again and again, is that moral choices are just a matter of individual taste. “It’s personal,” the respondents typically said. “It’s up to the individual. Who am I to say?”
  • “I would do what I thought made me happy or how I felt. I have no other way of knowing what to do but how I internally feel.”
  • their attitudes at the start of their adult lives do reveal something about American culture. For decades, writers from different perspectives have been warning about the erosion of shared moral frameworks and the rise of an easygoing moral individualism. Allan Bloom and Gertrude Himmelfarb warned that sturdy virtues are being diluted into shallow values. Alasdair MacIntyre has written about emotivism, the idea that it’s impossible to secure moral agreement in our culture because all judgments are based on how we feel at the moment. Charles Taylor has argued that morals have become separated from moral sources. People are less likely to feel embedded on a moral landscape that transcends self. James Davison Hunter wrote a book called “The Death of Character.” Smith’s interviewees are living, breathing examples of the trends these writers have described.
  • Smith and company found an atmosphere of extreme moral individualism — of relativism and nonjudgmentalism.
  • they have not been given the resources — by schools, institutions and families — to cultivate their moral intuitions, to think more broadly about moral obligations, to check behaviors that may be degrading.
  • the interviewees were so completely untroubled by rabid consumerism.
  • Many were quick to talk about their moral feelings but hesitant to link these feelings to any broader thinking about a shared moral framework or obligation. As one put it, “I mean, I guess what makes something right is how I feel about it. But different people feel different ways, so I couldn’t speak on behalf of anyone else as to what’s right and wrong.”
  • In most times and in most places, the group was seen to be the essential moral unit. A shared religion defined rules and practices. Cultures structured people’s imaginations and imposed moral disciplines. But now more people are led to assume that the free-floating individual is the essential moral unit. Morality was once revealed, inherited and shared, but now it’s thought of as something that emerges in the privacy of your own heart.
  •  
    Goodness, I went through a bit of emotion reading that. Whew. Gotta center. Anyhoo, I feel certainly conflicted over the author's idea of "shallow values." Personally, I don't necessarily see the need to have a shared moral framework to connect to. What is this framework if not a system to instill shame and obligation into its members? While I do think it's important to have an articulate moral opinion on relevant subjects, I also think the world cannot be divided into realms of right or wrong when we can barely see even an infinitely small part of it at one time. What's wrong with open-mindedness?
Javier E

The science of influencing people: six ways to win an argument | Science | The Guardian - 1 views

  • we have all come across people who appear to have next to no understanding of world events – but who talk with the utmost confidence and conviction
  • the latest psychological research can now help us to understand why
  • the “illusion of explanatory depth”
  • The problem is that we confuse a shallow familiarity with general concepts for real, in-depth knowledge.
  • our knowledge is also highly selective: we conveniently remember facts that support our beliefs and forget others
  • Psychological studies show that people fail to notice the logical fallacies in an argument if the conclusion supports their viewpoint
  • “motivated reasoning”
  • A high standard of education doesn’t necessarily protect us from these flaws
  • That false sense of expertise can, in turn, lead them to feel that they have the licence to be more closed-minded in their political views – an attitude known as “earned dogmatism”.
  • “People confuse their current level of understanding with their peak knowledge,”
  • Graduates, for instance, often overestimate their understanding of their degree subject:
  • recent psychological research also offers evidence-based ways towards achieving more fruitful discussions.
  • a simple but powerful way of deflating someone’s argument is to ask for more detail. “You need to get the ‘other side’ focusing on how something would play itself out, in a step by step fashion”
  • By revealing the shallowness of their existing knowledge, this prompts a more moderate and humble attitude.
  • You need to ask how something works to get the effect
  • If you are trying to debunk a particular falsehood – like a conspiracy theory or fake news – you should make sure that your explanation offers a convincing, coherent narrative that fills all the gaps left in the other person’s understanding
  • The persuasive power of well-constructed narratives means that it’s often useful to discuss the sources of misinformation, so that the person can understand why they were being misled in the first place
  • Each of our beliefs is deeply rooted in a much broader and more complex political ideology. Climate crisis denial, for instance, is now inextricably linked to beliefs in free trade, capitalism and the dangers of environmental regulation.
  • Attacking one issue may therefore threaten to unravel someone’s whole worldview – a feeling that triggers emotionally charged motivated reasoning. It is for this reason that highly educated Republicans in the US deny the overwhelming evidence.
  • disentangle the issue at hand from their broader beliefs, or to explain how the facts can still be accommodated into their worldview.
  • “All people have multiple identities,” says Prof Jay Van Bavel at New York University, who studies the neuroscience of the “partisan brain”. “These identities can become active at any given time, depending on the circumstances.”
  • you might have more success by appealing to another part of the person’s identity entirely.
  • when people are asked to first reflect on their other, nonpolitical values, they tend to become more objective in discussion on highly partisan issues, as they stop viewing facts through their ideological lens.
  • Another simple strategy to encourage a more detached and rational mindset is to ask your conversation partner to imagine the argument from the viewpoint of someone from another country
  • The aim is to help them recognise that they can change their mind on certain issues while staying true to other important elements of their personality.
  • this strategy increases “psychological distance” from the issue at hand and cools emotionally charged reasoning so that you can see things more objectively.
  • If you are considering policies with potentially long-term consequences, you could ask them to imagine viewing the situation through the eyes of someone in the future
  • people are generally much more rational in their arguments, and more willing to own up to the limits of their knowledge and understanding, if they are treated with respect and compassion.
  • Aggression, by contrast, leads them to feel that their identity is threatened, which in turn can make them closed-minded
  • Assuming that the purpose of your argument is to change minds, rather than to signal your own superiority, you are much more likely to achieve your aims by arguing gently and kindly rather than belligerently, and affirming your respect for the person, even if you are telling them some hard truths
  • As a bonus, you will also come across better to onlookers. “There’s a lot of work showing that third-party observers always attribute high levels of competence when the person is conducting themselves with more civility,”
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on t
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
Javier E

Elon Musk's Texts Shatter the Myth of the Tech Genius - The Atlantic - 0 views

  • The texts also cast a harsh light on the investment tactics of Silicon Valley’s best and brightest. There’s Calacanis’s overeager angel-investing pitches, and then you have the
  • “This is one of the most telling things I’ve ever seen about how investing works in Silicon Valley,” Jessica Lessin, the founder of the tech publication The Information, tweeted of the Andreessen exchange. Indeed, both examples from the document offer a look at the boys’ club and power networks of the tech world in action.
  • the eagerness to pony up for Musk and the lazy quality of this dealmaking reveal something deeper about the brokenness of this investment ecosystem and the ways that it is driven more by vibes and grievances than due diligence.
  • For this crew, the early success of their past companies or careers is usually prologue, and their skills will, of course, transfer to any area they choose to conquer (including magically solving free speech). But what they are actually doing is winging it.
  • There is a tendency, especially when it comes to the über-rich and powerful, to assume and to fantasize about what we can’t see. We ascribe shadowy brilliance or malevolence, which may very well be unearned or misguided.
  • What’s striking about the Musk messages, then, is the similarity between these men’s behavior behind closed doors and in public on Twitter. Perhaps the real revelation here is that the shallowness you see is the shallowness you get.
Javier E

There's More to Life Than Being Happy - Emily Esfahani Smith - The Atlantic - 1 views

  • "Everything can be taken from a man but one thing," Frankl wrote in Man's Search for Meaning, "the last of the human freedoms -- to choose one's attitude in any given set of circumstances, to choose one's own way."
  • This uniqueness and singleness which distinguishes each individual and gives a meaning to his existence has a bearing on creative work as much as it does on human love. When the impossibility of replacing a person is realized, it allows the responsibility which a man has for his existence and its continuance to appear in all its magnitude. A man who becomes conscious of the responsibility he bears toward a human being who affectionately waits for him, or to an unfinished work, will never be able to throw away his life. He knows the "why" for his existence, and will be able to bear almost any "how."
  • "To the European," Frankl wrote, "it is a characteristic of the American culture that, again and again, one is commanded and ordered to 'be happy.' But happiness cannot be pursued; it must ensue. One must have a reason to 'be happy.'"
  • the book's ethos -- its emphasis on meaning, the value of suffering, and responsibility to something greater than the self -- seems to be at odds with our culture, which is more interested in the pursuit of individual happiness than in the search for meaning.
  • "Happiness without meaning characterizes a relatively shallow, self-absorbed or even selfish life, in which things go well, needs and desire are easily satisfied, and difficult or taxing entanglements are avoided,"
  • about 4 out of 10 Americans have not discovered a satisfying life purpose. Forty percent either do not think their lives have a clear sense of purpose or are neutral about whether their lives have purpose. Nearly a quarter of Americans feel neutral or do not have a strong sense of what makes their lives meaningful
  • the single-minded pursuit of happiness is ironically leaving people less happy, according to recent research. "It is the very pursuit of happiness," Frankl knew, "that thwarts happiness."
  • Examining their self-reported attitudes toward meaning, happiness, and many other variables -- like stress levels, spending patterns, and having children -- over a month-long period, the researchers found that a meaningful life and happy life overlap in certain ways, but are ultimately very different. Leading a happy life, the psychologists found, is associated with being a "taker" while leading a meaningful life corresponds with being a "giver."
  • How do the happy life and the meaningful life differ?
  • While happiness is an emotion felt in the here and now, it ultimately fades away, just as all emotions do
  • Happiness, they found, is about feeling good. Specifically, the researchers found that people who are happy tend to think that life is easy, they are in good physical health, and they are able to buy the things that they need and want.
  • Most importantly from a social perspective, the pursuit of happiness is associated with selfish behavior -- being, as mentioned, a "taker" rather than a "giver." The psychologists give an evolutionary explanation for this: happiness is about drive reduction. If you have a need or a desire -- like hunger -- you satisfy it, and that makes you happy. People become happy, in other words, when they get what they want
  • "Happy people get a lot of joy from receiving benefits from others while people leading meaningful lives get a lot of joy from giving to others,"
  • People who have high meaning in their lives are more likely to help others in need.
  • What sets human beings apart from animals is not the pursuit of happiness, which occurs all across the natural world, but the pursuit of meaning, which is unique to humans
  • People whose lives have high levels of meaning often actively seek meaning out even when they know it will come at the expense of happiness. Because they have invested themselves in something bigger than themselves, they also worry more and have higher levels of stress and anxiety in their lives than happy people.
  • Meaning is not only about transcending the self, but also about transcending the present moment -- which is perhaps the most important finding of the study,
  • nearly 60 percent of all Americans today feel happy without a lot of stress or worry
  • Meaning, on the other hand, is enduring. It connects the past to the present to the future. "Thinking beyond the present moment, into the past or future, was a sign of the relatively meaningful but unhappy life,"
  • Having negative events happen to you, the study found, decreases your happiness but increases the amount of meaning you have in life.
  • "If there is meaning in life at all," Frankl wrote, "then there must be meaning in suffering."
  • "Being human always points, and is directed, to something or someone, other than oneself -- be it a meaning to fulfill or another human being to encounter. The more one forgets himself -- by giving himself to a cause to serve or another person to love -- the more human he is."
Javier E

Do Your Friends Actually Like You? - The New York Times - 1 views

  • Recent research indicates that only about half of perceived friendships are mutual. That is, someone you think is your friend might not be so keen on you. Or, vice versa, as when someone you feel you hardly know claims you as a bestie.
  • “The notion of doing nothing but spending time in each other’s company has, in a way, become a lost art,” replaced by volleys of texts and tweets, Mr. Sharp said. “People are so eager to maximize efficiency of relationships that they have lost touch with what it is to be a friend.”
  • It’s a concern because the authenticity of one’s relationships has an enormous impact on one’s health and well-being.
  • The study analyzed friendship ties among 84 subjects (ages 23 to 38) in a business management class by asking them to rank one another on a five-point continuum of closeness from “I don’t know this person” to “One of my best friends.” The feelings were mutual 53 percent of the time while the expectation of reciprocity was pegged at 94 percent. This is consistent with data from several other friendship studies conducted over the past decade, encompassing more than 92,000 subjects, in which the reciprocity rates ranged from 34 percent to 53 percent. (A short sketch of this calculation appears after these notes.)
  • “Friendship is difficult to describe,” said Alexander Nehamas, a professor of philosophy at Princeton, who in his latest book, “On Friendship,” spends almost 300 pages trying to do just that. “It’s easier to say what friendship is not and, foremost, it is not instrumental.”
  • It is not a means to obtain higher status, wangle an invitation to someone’s vacation home or simply escape your own boredom. Rather, Mr. Nehamas said, friendship is more like beauty or art, which kindles something deep within us and is “appreciated for its own sake.”
  • “Treating friends like investments or commodities is anathema to the whole idea of friendship,” said Ronald Sharp, a professor of English at Vassar College, who teaches a course on the literature of friendship. “It’s not about what someone can do for you, it’s who and what the two of you become in each other’s presence.”
  • Some blame human beings’ basic optimism, if not egocentrism, for the disconnect between perceived and actual friendships. Others point to a misunderstanding of the very notion of friendship in an age when “friend” is used as a verb, and social inclusion and exclusion are as easy as a swipe or a tap on a smartphone screen.
  • By his definition, friends are people you take the time to understand and allow to understand you.
  • Because time is limited, so, too, is the number of friends you can have, according to the work of the British evolutionary psychologist Robin I.M. Dunbar. He describes layers of friendship, where the topmost layer consists of only one or two people, say a spouse and best friend with whom you are most intimate and interact daily. The next layer can accommodate at most four people for whom you have great affinity, affection and concern and who require weekly attention to maintain. Out from there, the tiers contain more casual friends with whom you invest less time and tend to have a less profound and more tenuous connection. Without consistent contact, they easily fall into the realm of acquaintance. You may be friendly with them but they aren’t friends.
  • “There is a limited amount of time and emotional capital we can distribute, so we only have five slots for the most intense type of relationship,” Mr. Dunbar said. “People may say they have more than five but you can be pretty sure they are not high-quality friendships.”
  • Such boasting implies they have soul mates to spare in a culture where we are taught that leaning on someone is a sign of weakness and power is not letting others affect you. But friendship requires the vulnerability of caring as well as revealing things about yourself that don’t match the polished image in your Facebook profile or Instagram feed, said Mr. Nehamas at Princeton. Trusting that your bond will continue, and might even be strengthened, despite your shortcomings and inevitable misfortunes, he said, is a risk many aren’t willing to take.
  • According to medical experts, playing it safe by engaging in shallow, unfulfilling or nonreciprocal relationships has physical repercussions. Not only do the resulting feelings of loneliness and isolation increase the risk of death as much as smoking, alcoholism and obesity; you may also lose tone, or function, in the so-called smart vagus nerve, which brain researchers think allows us to be in intimate, supportive and reciprocal relationships in the first place.
  • In the presence of a true friend, Dr. Banks said, the smart or modulating aspect of the vagus nerve is what makes us feel at ease rather than on guard as when we are with a stranger or someone judgmental. It’s what enables us to feel O.K. about exposing the soft underbelly of our psyche and helps us stay engaged and present in times of conflict. Lacking authentic friendships, the smart vagus nerve is not exercised. It loses tone and one’s anxiety remains high, making abiding, deep connections difficult.
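  •  
    A toy sketch of the arithmetic behind the reciprocity figures quoted above, using invented nomination data (the study's raw data isn't given in this excerpt). Every nomination counts as a "perceived" friendship tie; a tie is "mutual" only if the named person names you back:

    # Hypothetical Python example -- the names and ties are invented, not the study's data.
    nominations = {
        "Ana": {"Ben", "Cal"},  # Ana names Ben and Cal as friends
        "Ben": {"Ana"},
        "Cal": {"Dee"},
        "Dee": {"Cal"},
    }

    # Every nomination is a perceived friendship tie.
    perceived = [(p, q) for p, friends in nominations.items() for q in friends]

    # A tie is mutual only if it is returned.
    mutual = [(p, q) for (p, q) in perceived if p in nominations.get(q, set())]

    rate = len(mutual) / len(perceived)
    print(f"{len(mutual)} of {len(perceived)} perceived ties are mutual ({rate:.0%})")
    # -> 4 of 5 perceived ties are mutual (80%)

    On this counting, the study's 53 percent means roughly half of all nominations were returned, while the 94 percent figure is how often subjects expected theirs to be.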
nataliedepaulo1

Before Vaquitas Vanish, a Desperate Bid to Save Them - The New York Times - 0 views

  • SAN FELIPE, Mexico — In the shallow sea waters of the Gulf of California swims a porpoise that few have seen, its numbers dwindling so fast that its very existence is now in peril.
  • “If you can’t remove the threats, the population keeps declining,” Dr. Turvey said. “You don’t have time for complacency.”
Duncan H

Rick Santorum Campaigning Against the Modern World - NYTimes.com - 0 views

  • As a journalist who covered Rick Santorum in Pennsylvania for years, I can understand the Tea Party’s infatuation with him. It’s his anger. It is in perfect synch with the constituency he is wooing.
  • Even at the height of his political success, when he had a lot to be happy about, Santorum was an angry man. I found it odd. I was used to covering politicians who had good dispositions — or were good at pretending they had good dispositions.
  • You could easily get him revved by bringing up the wrong topic or taking an opposing point of view. His nostrils would flare, his eyes would glare and he would launch into a disquisition on how, deep down, you were a shallow guy who could not grasp the truth and rightness of his positions.
  • “It’s just a curious bias of the media around here. It’s wonderful. One person says something negative and the media rushes and covers that. The wonderful balanced media that I love in this community.”
  • Santorum had reason to be peeved. He was running against the Democrat Bob Casey. He was trailing by double digits and knew he was going to lose. He was not a happy camper, but then he rarely is.
  • As he has shown in the Republican debates, Santorum can be equable. The anger usually flares on matters closest to his heart: faith, family and morals. And if, by chance, you get him started on the role of religion in American life, get ready for a Vesuvius moment.
  • Outside of these areas, he was more pragmatic. Then and now, Santorum held predictably conservative views, but he was astute enough to bend on some issues and be — as he put it in the Arizona debate — “a team player.”
  • In the Senate, he represented a state with a relentlessly moderate-to-centrist electorate so when campaigning he emphasized the good deeds he did in Washington. Editorial board meetings with Santorum usually began with him listing federal money he had brought in for local projects. People who don’t know him — and just see the angry Rick — don’t realize what a clever politician Santorum is. He didn’t rise to become a Washington insider through the power of prayer. He may say the Rosary, but he knows his Machiavelli.
  • That said, Santorum’s anger is not an act.  It is genuine. It has its roots in the fact that he had the misfortune to be born in the second half of the 20th century. In his view, it was an era when moral relativism and anti-religious feeling held sway, where traditional values were ignored or mocked, where heretics ruled civic and political life. If anything, it’s gotten worse in the 21st, with the election of Barack Obama. Leave it to Santorum to attack Obama on his theology, of all things. He sees the president as an exemplar of mushy, feel-good Christianity that emphasizes tolerance over rectitude, and the love of Jesus over the wrath of God.
  • Like many American Catholics, I struggle with the church’s teachings as they apply to the modern world. Santorum does not.
  • I once wrote that Santorum has one of the finest minds of the 13th century. It was meant to elicit a laugh, but there’s truth behind the remark. No Vatican II for Santorum. His belief system is the fixed and firm Catholicism of the Council of Trent in the mid-16th century. And Santorum is a warrior for those beliefs.
  • During the campaign, he has regularly criticized the media for harping on his public statements on homosexuality, contraception, abortion, the decline in American morals. Still, he can’t resist talking about them. These are the issues that get his juices flowing, not the deficit or federal energy policy.
  • Santorum went to Houston not to praise Kennedy but to bash him. To Santorum, the Kennedy speech did permanent damage because it led to secularization of American politics. He said it laid the foundation for attacks on religion by the secular left that has led to denial of free speech rights to religious people. “John F. Kennedy chose not to just dispel fear,” Santorum said, “he chose to expel faith.”
  • Ultimately Kennedy’s attempt to reassure Protestants that the Catholic Church would not control the government and suborn its independence advanced a philosophy of strict separation that would create a purely secular public square cleansed of all religious wisdom and the voice of religious people of all faiths. He laid the foundation for attacks on religious freedom and freedom of speech by the secular left and its political arms like the A.C.L.U. and the People for the American Way. This has and will continue to create dissension and division in this country as people of faith increasingly feel like second-class citizens. One consequence of Kennedy’s speech, Santorum said, is the debasement of our First Amendment right of religious freedom. Of all the great and necessary freedoms listed in the First Amendment, freedom to exercise religion (not just to believe, but to live out that belief) is the most important; before freedom of speech, before freedom of the press, before freedom of assembly, before freedom to petition the government for redress of grievances, before all others. This freedom of religion, freedom of conscience, is the trunk from which all other branches of freedom on our great tree of liberty get their life. And so it went for 5,000 words. It is a revelatory critique of the modern world and Santorum quoted G.K. Chesterton, Edmund Burke, St. Thomas Aquinas and Martin Luther King to give heft to his assertions. That said, it was an angry speech, conjuring up images of people of faith cowering before leftist thought police. Who could rescue us from this predicament? Who could banish the secularists and restore religious morality to its throne?
  •  
    An interesting critique of Santorum and his religious beliefs.
Javier E

Elon studies future of "Generation Always-On" - 1 views

  • By the year 2020, it is expected that youth of the “always-on generation,” brought up from childhood with a continuous connection to each other and to information, will be nimble, quick-acting multitaskers who count on the Internet as their external brain and who approach problems in a different way from their elders. "There is no doubt that brains are being rewired,"
  • the Internet Center, refers to the teens-to-20s age group born since the turn of the century as Generation AO, for “always-on." “They have grown up in a world that has come to offer them instant access to nearly the entirety of human knowledge, and incredible opportunities to connect, create and collaborate,"
  • some said they are already witnessing deficiencies in young people’s abilities to focus their attention, be patient and think deeply. Some experts expressed concerns that trends are leading to a future in which most people become shallow consumers of information, endangering society.
  • Many of the respondents in this survey predict that Gen AO will exhibit a thirst for instant gratification and quick fixes and a lack of patience and deep-thinking ability due to what one referred to as “fast-twitch wiring.”
  • “The replacement of memorization by analysis will be the biggest boon to society since the coming of mass literacy in the late 19th to early 20th century.” — Paul Jones, University of North Carolina-Chapel Hill
  • “Teens find distraction while working, distraction while driving, distraction while talking to the neighbours. Parents and teachers will have to invest major time and efforts into solving this issue – silence zones, time-out zones, meditation classes without mobile, lessons in ignoring people.”
  • “Society is becoming conditioned into dependence on technology in ways that, if that technology suddenly disappears or breaks down, will render people functionally useless. What does that mean for individual and social resiliency?”
  • “Short attention spans resulting from quick interactions will be detrimental to focusing on the harder problems and we will probably see a stagnation in many areas: technology, even social venues such as literature. The people who will strive and lead the charge will be the ones able to disconnect themselves to focus.”
  • “The underlying issue is that they will become dependent on the Internet in order to solve problems and conduct their personal, professional, and civic lives. Thus centralized powers that can control access to the Internet will be able to significantly control future generations. It will be much as in Orwell's 1984, where control was achieved by using language to shape and limit thought, so future regimes may use control of access to the Internet to shape and limit thought.”
  • “Increasingly, teens and young adults rely on the first bit of information they find on a topic, assuming that they have found the ‘right’ answer, rather than using context and vetting/questioning the sources of information to gain a holistic view of a topic.”
  • “Parents and kids will spend less time developing meaningful and bonded relationships in deference to the pursuit and processing of more and more segmented information competing for space in their heads, slowly changing their connection to humanity.”
  • “It’s simply not possible to discuss, let alone form societal consensus around major problems without lengthy, messy conversations about those problems. A generation that expects to spend 140 or fewer characters on a topic and rejects nuance is incapable of tackling these problems.”
Javier E

Teaching a Different Shakespeare From the One I Love - The New York Times - 0 views

  • Even the highly gifted students in my Shakespeare classes at Harvard are less likely to be touched by the subtle magic of his words than I was so many years ago or than my students were in the 1980s in Berkeley, Calif. What has happened? It is not that my students now lack verbal facility. In fact, they write with ease, particularly if the format is casual and resembles the texting and blogging that they do so constantly. The problem is that their engagement with language, their own or Shakespeare’s, often seems surprisingly shallow or tepid.
  • There are many well-rehearsed reasons for the change: the rise of television followed by the triumph of digital technology, the sending of instant messages instead of letters, the ‘‘visual turn’’ in our culture, the pervasive use of social media. In their wake, the whole notion of a linguistic birthright could be called quaint, the artifact of particular circumstances that have now vanished
  • For my parents, born in Boston, the English language was a treasured sign of arrival and rootedness; for me, a mastery of Shakespeare, the supreme master of that language, was like a purchased coat of arms, a title of gentility tracing me back to Stratford-upon-Avon.
  • It is not that the English language has ceased to be a precious possession; on the contrary, it is far more important now than it ever was in my childhood. But its importance has little or nothing to do any longer with the dream of rootedness. English is the premier international language, the global medium of communication and exchange.
  • This does not mean that I should abandon the paper assignment; it is an important form of training for a range of very different challenges that lie in their future. But I see that their deep imaginative engagement with Shakespeare, their intoxication, lies elsewhere.
  • When I ask them to write a 10-page paper analyzing a particular web of metaphors, exploring a complex theme or amassing evidence to support an argument, the results are often wooden; when I ask them to analyze a film clip, perform a scene or make a video, I stand a better chance of receiving something extraordinary.
  • as I have discovered in my teaching, it is a different Shakespeare from the one with whom I first fell in love. Many of my students may have less verbal acuity than in years past, but they often possess highly developed visual, musical and performative skills. They intuitively grasp, in a way I came to understand only slowly, the pervasiveness of songs in Shakespeare’s plays, the strange ways that his scenes flow one into another or the cunning alternation of close-ups and long views
  • The M.I.T. Global Shakespeare Project features an electronic archive that includes images of every page of the First Folio of 1623. In the Norton Shakespeare, which I edit, the texts of his plays are now available not only in the massive printed book with which I was initiated but also on a digital platform. One click and you can hear each song as it might have sounded on the Elizabethan stage; another click and you listen to key scenes read by a troupe of professional actors. It is a kind of magic unimagined even a few years ago or rather imaginable only as the book of a wizard like Prospero in ‘‘The Tempest.’’
  • But it is not the new technology alone that attracts students to Shakespeare; it is still more his presence throughout the world as the common currency of humanity. In Taiwan, Tokyo and Nanjing, in a verdant corner of the Villa Borghese gardens in Rome and in an ancient garden in Kabul, in Berlin and Bangkok and Bangalore, his plays continue to find new and unexpected ways to enchant.
Javier E

How the Internet Gets Inside Us : The New Yorker - 0 views

  • It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness.
  • In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix.
  • The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965.
  • If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.
  • Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began.
  • Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
  • What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
  • A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and that of a network is to extend them.
  • And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
  • Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.
  • the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.
Keiko E

Blanks for the Memories - WSJ.com - 0 views

  • "By 10, those memories are crystallized. Those are the memories we keep," says psychologist Carole Peterson at Memorial University of Newfoundland, the lead investigator. "It's the memories from earliest childhood that we lose."
  • Modern researchers think that storing and retrieving memories require language skills that don't develop until age 3 or 4. Others believe that while children can recall fragments of scenes from early life, they can't create autobiographical memories—the episodes that make up one's life story—until they have a firm concept of "self," which may take a few more years.
  • Indeed, experts say that if parents want their children to remember particular events from their early lives, they should discuss them in as much detail as possible and help children see their significance.
  • ...2 more annotations...
  • Scientists think the brain's prefrontal cortex processes experiences, using sensory input from the eyes, ears, nose and mouth, sorts them into categories, and tags the various memory fragments with specific associations (smells of home, friends from camp, bugs, a pet, for example). When a memory cue comes in, the brain searches its circuits for related fragments and assembles them like a jigsaw puzzle. Some fragments bring associated fragments along, which is why one old memory often leads to others. Tastes and smells are particularly evocative. (A toy sketch of this cue-based assembly appears after this list.)
  • Each time people bring up the same memory, those related fragments and circuits become stronger. "When you are 80 years old, remembering your kindergarten days, it's really the memory of a memory of a memory," says Dr. Devi. That may help explain why children's earliest memories are so unstable: Their neural traces are weak and shallow, whereas the few memories we revisit as we get older lay down stronger traces. Still, because the brain is constantly reassembling the fragments, they are vulnerable to distortion.
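The fragment-and-cue account above maps loosely onto a familiar data structure. Below is a minimal Python sketch, offered purely as an analogy rather than anything from the research: the class name, the tags, and the scoring rule are all invented for illustration.

    # A loose analogy in code -- not a neural model. "Fragments" carry
    # association tags; a cue assembles matching fragments, one retrieval
    # drags in associated fragments, and every retrieval strengthens the
    # trace (the "memory of a memory" effect described above).
    from collections import defaultdict

    class FragmentStore:
        def __init__(self):
            self.fragments = []                  # (content, set of tags)
            self.strength = defaultdict(float)   # fragment index -> trace strength

        def store(self, content, tags):
            self.fragments.append((content, set(tags)))

        def recall(self, cue_tags):
            cue = set(cue_tags)
            hits = {i for i, (_, tags) in enumerate(self.fragments) if tags & cue}
            if hits:
                # One old memory leads to others: widen the cue with the
                # tags of everything retrieved so far, then gather again.
                widened = set.union(cue, *(self.fragments[i][1] for i in hits))
                hits |= {i for i, (_, tags) in enumerate(self.fragments)
                         if tags & widened}
            ranked = sorted(hits,
                            key=lambda i: len(self.fragments[i][1] & cue) + self.strength[i],
                            reverse=True)
            for i in ranked:
                self.strength[i] += 1.0         # revisiting lays down a stronger trace
            return [self.fragments[i][0] for i in ranked]

    store = FragmentStore()
    store.store("grandmother's kitchen", {"smell:bread", "home"})
    store.store("first day of camp", {"friends", "bugs"})
    store.store("baking with mom", {"smell:bread", "home", "mom"})
    print(store.recall({"smell:bread"}))   # smell cues are particularly evocative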
Javier E

The Problem With Meaning - NYTimes.com - 0 views

  • “meaning” has become the stand-in concept for everything the soul yearns for and seeks. It is one of the few phrases acceptable in modern parlance to describe a fundamentally spiritual need.
  • what do we mean when we use the word meaning?
  • as commonly used today, the word is flabby and vacuous, the product of a culture that has grown inarticulate about inner life.
  • ...10 more annotations...
  • meaning is an uplifting state of consciousness. It’s what you feel when you’re serving things beyond self.
  • They subscribed to moral systems — whether secular or religious — that recommended specific ways of being, and had specific structures of what is right and wrong, and had specific disciplines about how you might get better over time.
  • Meaningfulness tries to replace structures, standards and disciplines with self-regarding emotion. The ultimate authority of meaningfulness is the warm tingling we get when we feel significant and meaningful.
  • The philosophy of meaningfulness emerges in a culture in which there is no common moral vocabulary or framework. It emerges amid radical pluralism, when people don’t want to judge each other. Meaningfulness emerges when the fundamental question is, do we feel good?
  • Meaningfulness tries to replace moral systems with the emotional corona that surrounds acts of charity.
  • Because meaningfulness is built solely on an emotion, it is contentless and irreducible. Because it is built solely on emotion, it’s subjective and relativistic. You get meaning one way. I get meaning another way. Who is any of us to judge another’s emotion?
  • Because it’s based solely on sentiment, it is useless. There are no criteria to determine what kind of meaningfulness is higher. There’s no practical manual that would help guide each of us as we move from shallower forms of service to deeper ones. There is no hierarchy of values that would help us select, from among all the things we might do, that activity which is highest and best to do.
  • Because it’s based solely on emotion, it’s fleeting. When the sensations of meaningfulness go away, the cause that once aroused them gets dropped, too. Ennui floods in. Personal crisis follows. There’s no reliable ground.
  • If we look at the people in history who achieved great things — like Nelson Mandela or Albert Schweitzer or Abraham Lincoln — it wasn’t because they wanted to bathe luxuriously in their own sense of meaningfulness. They had objective and eternally true standards of justice and injustice. They were indignant when those eternal standards were violated.
  • Real moral systems are based on a balance of intellectual rigor and aroused moral sentiments.
Ellie McGinnis

Meaningful Activities Protect the Brain From Depression - Olga Khazan - The Atlantic - 2 views

  • Aristotle famously said there were two basic types of joy: hedonia, or that keg-standing, Netflix binge-watching, Nutella-from-the-jar selfish kind of pleasure, and eudaimonia, or the pleasure that comes from helping others, doing meaningful work, and otherwise leading a life well-lived.
  • "Happiness without meaning characterizes a relatively shallow, self-absorbed or even selfish life, in which things go well, needs and desire are easily satisfied, and difficult or taxing entanglements are avoided,
  • “While happiness is an emotion felt in the here and now, it ultimately fades away, just as all emotions do ... Meaning, on the other hand, is enduring. It connects the past to the present to the future.”
  • ...6 more annotations...
  • scientists have found they can measure the amount that a person enjoys something by taking MRIs of activation levels in the ventral striatum—the “reward center” nestled in the bullseye of the brain. The ventral striata of teens, in particular, tend to light up especially brightly in response to all kinds of rewards
  • Perhaps because teens’ brains are so sensitive to these little jolts of pleasure—or lack thereof—late adolescence is also when depression peaks for many people.
  • the researchers followed a group of 39 teenagers over the course of one year to see whether the way their brains reacted to either eudaimonic or hedonic rewards correlated with how depressed they felt over time. (A minimal sketch of this correlation analysis appears after this list.)
  • the teens who had the greatest brain response to the generous, family-donation financial decision had the greatest declines in depressive symptoms over time. And those who got a boost from the risk-taking game were more likely to have an increase in depression. The types of rewards the teens responded to, it seems, changed their behavior in ways that altered their overall well-being.
  • It’s important to note that this doesn’t necessarily mean parents can inoculate their teens against depression by forcing them to seek happiness through volunteering. But it could be that teens who already do that kind of thing because it really does lift their spirits are likely to have that lift stick with them.
  • “Taken together, our findings suggest that well-being may depend on attending to higher values related to family, culture, and morality, rather than to immediate, selfish pleasure,”
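The analysis these notes describe reduces, at its core, to a pair of correlations. Here is a minimal sketch; the activation scores and symptom changes below are invented for illustration (chosen only to mimic the reported pattern), not the study's data.

    # Hypothetical numbers, not the study's data: each teen has a
    # ventral-striatum response to a eudaimonic reward (donating to
    # family) and to a hedonic reward (risk-taking), plus the one-year
    # change in depressive symptoms (negative = symptoms declined).
    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    eudaimonic_response = [0.8, 1.2, 0.3, 0.9, 1.5]   # made-up activation scores
    hedonic_response    = [1.1, 0.4, 1.6, 0.7, 0.2]
    symptom_change      = [-3, -5, 1, -2, -6]         # made-up one-year changes

    # The reported pattern: stronger eudaimonic response goes with a bigger
    # symptom decline (negative r); stronger hedonic response with a rise.
    print(pearson_r(eudaimonic_response, symptom_change))
    print(pearson_r(hedonic_response, symptom_change))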
Javier E

Narcissism Is Increasing. So You're Not So Special. - The New York Times - 1 views

  • A 2010 study in the journal Social Psychological and Personality Science found that the percentage of college students exhibiting narcissistic personality traits, based on their scores on the Narcissistic Personality Inventory, a widely used diagnostic test, has increased by more than half since the early 1980s, to 30 percent. In their book “The Narcissism Epidemic,” the psychology professors Jean M. Twenge and W. Keith Campbell show that narcissism has increased as quickly as obesity has since the 1980s. Even our egos are getting fat.
  • This is a costly problem. While full-blown narcissists often report high levels of personal satisfaction, they create havoc and misery around them. There is overwhelming evidence linking narcissism with lower honesty and raised aggression.
  • narcissism isn’t an either-or characteristic. It’s more of a set of progressive symptoms (like alcoholism) than an identifiable state (like diabetes). Millions of Americans exhibit symptoms, but still have a conscience and a hunger for moral improvement. At the very least, they really don’t want to be terrible people.
  • ...13 more annotations...
  • Rousseau wrote about “amour-propre,” a kind of self-love based on the opinions of others. He considered it unnatural and unhealthy, and believed that arbitrary social comparison led to people wasting their lives trying to look and sound attractive to others.
  • Narcissus falls in love not with himself, but with his reflection. In the modern version, Narcissus would fall in love with his own Instagram feed, and starve himself to death while compulsively counting his followers.
  • If our egos are obese with amour-propre, social media can indeed serve up the empty emotional carbs we crave. Instagram and the like don’t create a narcissist, but studies suggest they act as an accelerant — a near-ideal platform to facilitate what psychologists call “grandiose exhibitionism.”
  • No doubt you have seen this in others, and maybe even a little of it in yourself as you posted a flattering selfie — and then checked back 20 times for “likes.”
  • A healthy self-love that leads to true happiness is what Rousseau called “amour de soi.” It builds up one’s intrinsic well-being, as opposed to feeding shallow cravings to be admired.
  • First, take the Narcissistic Personality Inventory test.
  • Here is an individual self-improvement strategy that combines a healthy self-love (for Valentine’s Day) with a small sacrifice (possibly for Lent).
  • Cultivating amour de soi requires being fully alive at this moment, as opposed to being virtually alive while wondering what others think. The soulful connection with another person, the enjoyment of a beautiful hike alone (not shared on Facebook) or a prayer of thanks over your sleeping child (absent a #blessed tweet) could be considered expressions of amour de soi.
  • Second, get rid of the emotional junk food that is feeding any unhealthy self-obsession. Make a list of opinions to disregard — especially those of flatterers and critics — and review the list each day. Resolve not to waste a moment trying to impress others,
  • Third, go on a social media fast. Post to communicate, praise and learn — never to self-promote.
  • As for clinically significant narcissism—along with greed, invidious prejudice, and habitual lying—it is simply another one of our anti-social behaviors that mutated from our basic genetic drives…in this case the drive to survive. The opposite of narcissism is empathy, a brain-wiring that evolved much later and in parallel with our increased reliance on social interaction as a means to improve the chances of sending our genes down the line (the drive to reproduce). There is thus a certain irony in the fact that the misnamed “social” media are encouraging a decline in empathy. Your thoughts?
  • Sure you're not confusing narcissism with vanity? If you've ever had the misfortune of having someone with narcissistic personality disorder in your life, you would know it's about more than selfies and seeking constant approval. They are truly sick individuals that destroy the lives of those they claim to love. I would say people's addictions to social media "likes" and posting selfies are vanity.
  • Perhaps we need to distinguish between Narcissistic Personality Disorder (NPD) and the adjective "narcissistic." We all know lots of people with way too much self-regard. NPD on the other hand ruins lives and certainly families. People who have NPD are way beyond self centered. They see the world as black and white and all people they interact with become reflections. People with NPD go to extreme lengths to control those around them and will lie, cheat and steal to do that. They are never wrong; the other person is always wrong. I have worked for Narcissists and lived with one. Let's not throw around this term without defining it, please.
kushnerha

There's nothing wrong with grade inflation - The Washington Post - 0 views

  • By the early ’90s, so long as one had the good sense to major in the humanities — all bets were off in the STEM fields — it was nearly impossible to get a final grade below a B-minus at an elite college. According to a 2012 study, the average college GPA, which in the 1930s was a C-plus, had risen to a B at public universities and a B-plus at private schools. At Duke, Pomona and Harvard, D’s and F’s combine for just 2 percent of all grades. A Yale report found that 62 percent of all Yale grades are A or A-minus. According to a 2013 article in the Harvard Crimson, the median grade at Harvard was an A-minus, while the most common grade was an A.
  • The result is widespread panic about grade inflation at elite schools. (The phenomenon is not as prevalent at community colleges and less-selective universities.) Some blame students’ consumer mentality, a few see a correlation with small class sizes (departments with falling enrollments want to keep students happy), and many cite a general loss of rigor in a touchy-feely age.
  • Yet whenever elite schools have tried to fight grade inflation, it’s been a mess. Princeton instituted strict caps on the number of high grades awarded, then abandoned the plan, saying the caps dissuaded applicants and made students miserable. At Wellesley, grade-inflated humanities departments mandated that the average result in their introductory and intermediate classes not exceed a B-plus. According to one study, enrollment fell by one-fifth, and students were 30 percent less likely to major in one of these subjects.
  • ...12 more annotations...
  • I liked the joy my students found when they actually earned a grade they’d been reaching for. But whereas I once thought we needed to contain grades, I now see that we may as well let them float skyward. If grade inflation is bad, fighting it is worse. Our goal should be ending the centrality of grades altogether. For years, I feared that a world of only A’s would mean the end of meaningful grades; today, I’m certain of it. But what’s so bad about that?
  • It’s easy to see why schools want to fight grade inflation. Grades should motivate certain students: those afraid of the stigma of a bad grade or those ambitious, by temperament or conditioning, to succeed in measurable ways. Periodic grading during a term, on quizzes, tests or papers, provides feedback to students, which should enable them to do better. And grades theoretically signal to others, such as potential employers or graduate schools, how well the student did. (Grade-point averages are also used for prizes and class rankings, though that doesn’t strike me as an important feature.)
  • But it’s not clear that grades work well as motivators. Although recent research on the effects of grades is limited, several studies in the 1970s, 1980s and 1990s measured how students related to a task or a class when it was graded compared to when it was ungraded. Overall, graded students are less interested in the topic at hand and — for obvious, common-sense reasons — more inclined to pick the easiest possible task when given the chance. In the words of progressive-education theorist Alfie Kohn, author of “The Homework Myth,” “the quality of learning declines” when grades are introduced, becoming “shallower and more superficial when the point is to get a grade.”
  • Even where grades can be useful, as in describing what material a student has mastered, they are remarkably crude instruments. Yes, the student who gets a 100 on a calculus exam probably grasps the material better than the student with a 60 — but only if she retains the knowledge, which grades can’t show.
  • I still can’t say very well what separates a B from an A. What’s more, I never see the kind of incompetence or impudence that would merit a D or an F. And now, in our grade-inflated world, it’s even harder to use grades to motivate, or give feedback, or send a signal to future employers or graduate schools.
  • According to a 2012 study by the Chronicle of Higher Education, GPA was seventh out of eight factors employers considered in hiring, behind internships, extracurricular activities and previous employment. Last year, Stanford’s registrar told the Chronicle about “a clamor” from employers “for something more meaningful” than the traditional transcript. The Lumina Foundation gave a $1.27 million grant to two organizations for college administrators working to develop better student records, with grades only one part of a student’s final profile.
  • Some graduate schools, too, have basically ditched grades. “As long as you don’t bomb and flunk out, grades don’t matter very much in M.F.A. programs,” the director of one creative-writing program told the New York Times. To top humanities PhD programs, letters of reference and writing samples matter more than overall GPA (although students are surely expected to have received good grades in their intended areas of study). In fact, it’s impossible to get into good graduate or professional schools without multiple letters of reference, which have come to function as the kind of rich, descriptive comments that could go on transcripts in place of grades.
  • suggests that GPAs serve not to validate students from elite schools but to keep out those from less-prestigious schools and large public universities, where grades are less inflated. Grades at community colleges “have actually dropped” over the years, according to Stuart Rojstaczer, a co-author of the 2012 grade-inflation study. That means we have two systems: one for students at elite schools, who get jobs based on references, prestige and connections, and another for students everywhere else, who had better maintain a 3.0. Grades are a tool increasingly deployed against students without prestige.
  • The trouble is that, while it’s relatively easy for smaller colleges to go grade-free, with their low student-to-teacher ratios, it’s tough for professors at larger schools, who must evaluate more students, more quickly, with fewer resources. And adjuncts teaching five classes for poverty wages can’t write substantial term-end comments, so grades are a necessity if they want to give any feedback at all.
  • It would mean hiring more teachers and paying them better (which schools should do anyway). And if transcripts become more textured, graduate-school admission offices and employers will have to devote more resources to reading them, and to getting to know applicants through interviews and letters of reference — a salutary trend that is underway already.
  • When I think about getting rid of grades, I think of happier students, with whom I have more open, democratic relationships. I think about being forced to pay more attention to the quiet ones, since I’ll have to write something truthful about them, too. I’ve begun to wonder if a world without grades may be one of those states of affairs (like open marriages, bicycle lanes and single-payer health care) that Americans resist precisely because they seem too good, suspiciously good. Nothing worth doing is supposed to come easy.
  • Alfie Kohn, too, sees ideology at work in the grade-inflation panic. “Most of what powers the arguments against grade inflation is a very right-wing idea that excellence consists in beating everyone else around you,” he says. “Even when you have sorted them — even when they get to Harvard! — we have to sort them again.” In other words, we can trust only a system in which there are clear winners and losers.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 1 views

  • Skinner's approach stressed the historical associations between a stimulus and the animal's response -- an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past.
  • Chomsky's conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations.
  • Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment.
  • ...17 more annotations...
  • David Marr, a neuroscientist colleague of Chomsky's at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision,
  • a complex biological system can be understood at three distinct levels. The first level ("computational level") describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain's identification of the objects present in the image we had observed. The second level ("algorithmic level") describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level ("implementation level") describes how our own biological hardware of cells implements the procedure described by the algorithmic level. (A toy illustration of these three levels appears after this list.)
  • The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the "black box" that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.
  • As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner's behaviorist paradigm -- an achievement commonly referred to as the "cognitive revolution,"
  • While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists
  • Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field's heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is unlikely to yield general principles about the nature of intelligent beings or about cognition.
  • Chomsky acknowledged that the statistical approach might have practical value, just as in the example of a useful search engine, and is enabled by the advent of fast computers capable of processing massive data. But as far as a science goes, Chomsky would argue it is inadequate, or more harshly, kind of shallow. (The sketch after this list shows such a statistical predictor in miniature.)
  • An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery
  • Implicit in this endeavor is the assumption that with enough sophisticated statistical tools and a large enough collection of data, signals of interest can be weeded out from the noise in large and poorly understood biological systems.
  • Brenner, a contemporary of Chomsky who also participated in the same symposium on AI, was equally skeptical about new systems approaches to understanding the brain. When describing an up-and-coming systems approach to mapping brain circuits called Connectomics, which seeks to map the wiring of all neurons in the brain (i.e. diagramming which nerve cells are connected to others), Brenner called it a "form of insanity."
  • These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?
  • Ever since Isaiah Berlin's famous essay, it has become a favorite pastime of academics to place various thinkers and scientists on the "Hedgehog-Fox" continuum: the Hedgehog, a meticulous and specialized worker, driven by incremental progress in a clearly defined field versus the Fox, a flashier, ideas-driven thinker who jumps from question to question, ignoring field boundaries and applying his or her skills where they seem applicable.
  • Chomsky's work has had tremendous influence on a variety of fields outside his own, including computer science and philosophy, and he has not shied away from discussing and critiquing the influence of these ideas, making him a particularly interesting person to interview.
  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say, physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • it has been argued -- in my view rather plausibly, though neuroscientists don't like it -- that neuroscience for the last couple hundred years has been on the wrong track.
  • neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
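Two notes above -- Marr's three levels and Chomsky's complaint about the statistics-driven "new AI" -- can be made concrete with a single toy. The bigram word predictor below is my illustration, not anything from the interview: the computational level is the task (given a word, predict the next), the algorithmic level is the procedure (count past co-occurrences and pick the most frequent continuation, pure past-to-future association in the Skinnerian spirit), and the implementation level is, here, Python dictionaries on a CPU rather than cells and synapses.

    from collections import Counter, defaultdict

    def train_bigrams(corpus):
        # Algorithmic level: tally which word follows which.
        counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict(counts, word):
        # Computational level, as an input/output spec: word in, word out.
        if word not in counts:
            return None
        return counts[word].most_common(1)[0][0]

    counts = train_bigrams("the dog barks the dog sleeps the cat sleeps")
    print(predict(counts, "the"))   # 'dog' -- the most frequent regularity
    print(predict(counts, "cat"))   # 'sleeps'

    # Chomsky's point is visible here: the model mirrors regularities in its
    # data but contains no internal grammar, so it cannot say why "dog the
    # barks" never occurs, nor how a child generalizes from minimal exposure.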
Javier E

Addicted to Distraction - The New York Times - 0 views

  • ONE evening early this summer, I opened a book and found myself reading the same paragraph over and over, a half dozen times before concluding that it was hopeless to continue. I simply couldn’t marshal the necessary focus.
  • All my life, reading books has been a deep and consistent source of pleasure, learning and solace. Now the books I regularly purchased were piling up ever higher on my bedside table, staring at me in silent rebuke.
  • Instead of reading them, I was spending too many hours online,
  • ...15 more annotations...
  • “The net is designed to be an interruption system, a machine geared to dividing attention,” Nicholas Carr explains in his book “The Shallows: What the Internet Is Doing to Our Brains.” “We willingly accept the loss of concentration and focus, the division of our attention and the fragmentation of our thoughts, in return for the wealth of compelling or at least diverting information we receive.”
  • Addiction is the relentless pull to a substance or an activity that becomes so compulsive it ultimately interferes with everyday life
  • Denial is any addict’s first defense. No obstacle to recovery is greater than the infinite capacity to rationalize our compulsive behaviors
  • According to one recent survey, the average white-collar worker spends about six hours a day on email.
  • The brain’s craving for novelty, constant stimulation and immediate gratification creates something called a “compulsion loop.” Like lab rats and drug addicts, we need more and more to get the same effect.
  • Endless access to new information also easily overloads our working memory. When we reach cognitive overload, our ability to transfer learning to long-term memory significantly deteriorates.
  • By that definition, nearly everyone I know is addicted in some measure to the Internet. It has arguably replaced work itself as our most socially sanctioned addiction.
  • we humans have a very limited reservoir of will and discipline. We’re far more likely to succeed by trying to change one behavior at a time, ideally at the same time each day, so that it becomes a habit, requiring less and less energy to sustain.
  • Now it was time to detox. I interpreted the traditional second step — belief that a higher power could help restore my sanity — in a more secular way. The higher power became my 30-year-old daughter, who disconnected my phone and laptop from both my email and the Web.
  • During those first few days, I did suffer withdrawal pangs, most of all the hunger to call up Google and search for an answer to some question that arose. But with each passing day offline, I felt more relaxed, less anxious, more able to focus and less hungry for the next shot of instant but short-lived stimulation. What happened to my brain is exactly what I hoped would happen: It began to quiet down.
  • I had brought more than a dozen books of varying difficulty and length on my vacation. I started with short nonfiction, and then moved to longer nonfiction as I began to feel calmer and my focus got stronger. I eventually worked my way up to “The Emperor of All Maladies.”
  • I am back at work now, and of course I am back online. The Internet isn’t going away, and it will continue to consume a lot of my attention. My aim now is to find the best possible balance between time online and time off
  • I also make it my business now to take on more fully absorbing activities as part of my days. Above all, I’ve kept up reading books, not just because I love them, but also as a continuing attention-building practice.
  • I’ve retained my longtime ritual of deciding the night before on the most important thing I can accomplish the next morning. That’s my first work activity most days, for 60 to 90 minutes without interruption. Afterward, I take a 10- to 15-minute break to quiet my mind and renew my energy.
  • If I have other work during the day that requires sustained focus, I go completely offline for designated periods, repeating my morning ritual. In the evening, when I go up to my bedroom, I nearly always leave my digital devices downstairs.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • ...39 more annotations...
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased. (A sketch of this three-group comparison appears after this list.)
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
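The UCSD experiment flagged earlier in this list boils down to comparing one outcome across three groups. Here is a minimal sketch with a hand-rolled one-way ANOVA F statistic; the scores are invented (chosen only to mimic the reported ordering), not the study's data.

    # Hypothetical working-memory scores for the three phone-location
    # conditions -- not the study's data.
    desk       = [12, 14, 11, 13, 12]   # phone in view on the desk
    pocket_bag = [14, 15, 13, 16, 14]
    other_room = [17, 16, 18, 17, 19]

    def one_way_anova_f(*groups):
        scores = [x for g in groups for x in g]
        grand_mean = sum(scores) / len(scores)
        # Between-group spread: how far each group mean sits from the grand mean.
        ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
        # Within-group spread: scores around their own group mean.
        ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
        df_between = len(groups) - 1
        df_within = len(scores) - len(groups)
        return (ss_between / df_between) / (ss_within / df_within)

    for name, g in [("desk", desk), ("pocket/bag", pocket_bag), ("room", other_room)]:
        print(name, sum(g) / len(g))    # means rise as the phone gets farther away
    print("F =", one_way_anova_f(desk, pocket_bag, other_room))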