Group items matching "think" in title, tags, annotations or url

Is Empathy Overrated? | Big Think - 0 views

  • Empathy seems to be a quality you can never overdo. It’s like a megavitamin of emotionally relating: the more you display, the better a human you are.
  • In his last book, Just Babies, he argued humans are born moral, no religion required.
  • Telling someone empathy is overrated is akin to stating puppies are useless and ugly.
  • Empathy is the act of coming to experience the world as you think someone else does … If your suffering makes me suffer, if I feel what you feel, that’s empathy in the sense that I’m interested in here.
  • For example, donating to foreign charities ups our dopamine intake—we feel better because we’re making a difference (which, of course, can make it more about how we feel than who we’re helping).
  • Yet it’s not in our biological inheritance to offer unchecked empathy. Bloom points to our tribal nature as evidence. We’re going to care more for those closest to us, such as family and friends, than Cambodian orphans.
  • Anyone who thinks that it’s important for a therapist to feel depressed or anxious while dealing with depressed or anxious people is missing the point of therapy.
  • Bloom then discusses the difference between what Binghamton professor and Asian Studies scholar Charles Goodman describes as “sentimental compassion” and “great compassion.” The first is similar to empathy, which leads to imbalances in relationships and one’s own psychological state. Simply put, it’s exhausting.
  • Empathy is going to be a buzzword for some time to come. It feeds into our social nature, which Bloom sees nothing wrong with.
  •  
    I found this article very interesting because it talks about how empathy, as an emotion, can sometimes be bad for us. I really like the author's point that unchecked empathy is not part of our biological inheritance, since our tribal nature leads us to care more for those closest to us. It is very interesting to think about how modern society shapes our emotions and behavior, and how empathy is gradually becoming part of our nature. --Sissi (2/22/2017)

First-born children have better thinking skills, study says | Society | The Guardian - 0 views

  • They may be jokingly referred to as PFBs – precious first borns – on popular parenting websites, but a study says first-born children really do reap the benefits of being number one.
  • the first-born generally received more help with tasks that develop thinking skills.
  • The study found parents changed their behaviour as they had more children, giving less mental stimulation and taking part in fewer activities like reading with the child, crafts and playing musical instruments.
  •  
    I find this research interesting. The researchers made a population-level observation, similar to the population methods used in evolutionary biology. The author also discusses several hypotheses about why first-born children tend to have better thinking skills. Although the author has no direct evidence for these hypotheses, the tendency itself is a fact. Despite the many uncertainties in this research, the result might appeal to many first-born children and make them feel a little superior. --Sissi (2/9/2017)

How One Psychologist Is Tackling Human Biases in Science - 0 views

  • It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.
  • Peer review seems to be a more fallible instrument—especially in areas such as medicine and psychology—than is often appreciated, as the emerging “crisis of replicability” attests.
  • Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?
  • A common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.
  • The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan
  • He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations
  • Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.
  • Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology.
  • Sometimes it seems surprising that science functions at all.
  • Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”).
  • Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught.
  • Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”
  • Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims.

When Did 'Ambition' Become a Dirty Word? - The New York Times - 0 views

  • but is instead a stark, black-and-white video, a public service announcement that takes on a thorny issue that dominated the last presidential campaign and has divided people on the right and left.
  • “Embrace Ambition,”
  • “I can think of a lot of dirty words,” Ms. Witherspoon says. “Ambition is not one of them.”
  • Nevertheless, she seemed to choose her words carefully as she spoke about the campaign.
  • she wanted to get away from the idea that this project was politically motivated, or anti-Trump.
  • But the issue of ambition, and the way it is used to defame women, is nevertheless personal to her.
  • This was confusing to Ms. Burch, who never saw herself as being a particularly threatening person.
  • “I do it, too,” she said. “I’m guilty of all of it.”
  • And the word “feminist” began to shed its Bella Abzug and Betty Friedan connotations, as women like Madonna went from saying they are “not feminists” but “humanists” to wearing T-shirts at anti-Trump events that had the word “feminist” emblazoned across the center.
  •  
    Sometimes people are ashamed of their ambition. They are afraid to have a dream because they cannot bear failure. I think people don't dare to dream big now, and it might be because of the economic thinking of recent years. Most people follow the rule of efficiency, so nobody is willing to take risks and attempt a revolution; staking everything on a big bet is not the most efficient choice in economics. Economics may tell us it is okay to be greedy, but it confines us to its models and makes us less willing to make a breakthrough. --Sissi (3/2/2017)

How "Arrival"'s Alien Language Might Actually Make You See the Future | Big Think - 0 views

  • The language we speak can help us see the future.
  • First, learning a language makes you smarter.
  • children learn languages more easily because the plasticity of their developing brains lets them use both hemispheres in language acquisition, while in most adults language is lateralized to one hemisphere - usually the left
  • Second, Time isn’t really linear
  • time is really just “your assessment of how long something took.”
  • Time can also feel faster or slower depending on how we experience it.
  • “Language doesn’t determine how you think, but it can determine how you think about things.”
  • Defining snow with such specific language grants it multiple meanings, and those multiple meanings increase its significance within the culture of the people using the language.
  • sees beyond the linear bounds of time
  •  
    In TOK, we discussed how language can affect us. Language doesn't shape who we are, but it obliges us to think in certain ways; by speaking different languages, we can switch among different mindsets. In this article, the author also uses the familiar example of the many Eskimo words for snow to show that language lets us see through different lenses, since different languages place their stresses and significances differently. The "time traveler" in the title is not meant literally: we get to travel from time to time because we can see beyond the linear bounds of time, switching perspective from one observer of time to another. --Sissi (2/13/2017)

Do You Know What You Don't Know? - Art Markman - Harvard Business Review - 0 views

  • You probably don't know as much as you think you do. When put to the test, most people find they can't explain the workings of everyday things they think they understand.
  • Find an object you use daily (a zipper, a toilet, a stereo speaker) and try to describe the particulars of how it works. You're likely to discover unexpected gaps in your knowledge. In psychology, we call this cognitive barrier the illusion of explanatory depth. It means you think you fully understand something that you actually don't.
  • We see this every day in buzz words. Though we often use these words, their meanings are usually unclear. They mask gaps in our knowledge, serving as placeholders that gloss concepts we don't fully understand.
  • an upsetting instance of knowledge gaps in the last decade was the profound misunderstanding of complex financial products that contributed to the market collapse of 2007. Investment banks were unable to protect themselves from exposure to these products, because only a few people (either buyers or sellers) understood exactly what was being sold. Those individuals who did comprehend these product structures ultimately made huge bets against the market using credit-default swaps. The willingness of companies like AIG to sell large quantities of credit-default swaps reflected a gap in their knowledge about the riskiness of products they were insuring.
  • To discover the things you can't explain, take a lesson from teachers. When you instruct someone else, you have to fill the gaps in your own knowledge
  • Explain concepts to yourself as you learn them. Get in the habit of self-teaching. Your explanations will reveal your own knowledge gaps and identify words and concepts whose meanings aren't clear.
  • Engage others in collaborative learning. Help identify the knowledge gaps of the people around you. Ask them to explain difficult concepts, even if you think everyone understands them
  • When you do uncover these gaps, treat them as learning opportunities, not signs of weakness.

The Positive Power of Negative Thinking - NYTimes.com - 0 views

  • visualizing a successful outcome, under certain conditions, can make people less likely to achieve it. She rendered her experimental participants dehydrated, then asked some of them to picture a refreshing glass of water. The water-visualizers experienced a marked decline in energy levels, compared with those participants who engaged in negative or neutral fantasies. Imagining their goal seemed to deprive the water-visualizers of their get-up-and-go, as if they’d already achieved their objective.
  • take affirmations, those cheery slogans intended to lift the user’s mood by repeating them: “I am a lovable person!” “My life is filled with joy!” Psychologists at the University of Waterloo concluded that such statements make people with low self-esteem feel worse
  • Ancient philosophers and spiritual teachers understood the need to balance the positive with the negative, optimism with pessimism, a striving for success and security with an openness to failure and uncertainty
  • Very brief training in meditation, according to a 2009 article in The Journal of Pain, brought significant reductions in pain
  • Buddhist meditation, too, is arguably all about learning to resist the urge to think positively — to let emotions and sensations arise and pass, regardless of their content
  • the relentless cheer of positive thinking begins to seem less like an expression of joy and more like a stressful effort to stamp out any trace of negativity.

Interview: Ted Chiang | The Asian American Literary Review - 0 views

  • I think most people’s ideas of science fiction are formed by Hollywood movies, so they think most science fiction is a special effects-driven story revolving around a battle between good and evil
  • I don’t think of that as a science fiction story. You can tell a good-versus-evil story in any time period and in any setting. Setting it in the future and adding robots to it doesn’t make it a science fiction story.
  • I think science fiction is fundamentally a post-industrial revolution form of storytelling. Some literary critics have noted that the good-versus-evil story follows a pattern where the world starts out as a good place, evil intrudes, the heroes fight and eventually defeat evil, and the world goes back to being a good place. Those critics have said that this is fundamentally a conservative storyline because it’s about maintaining the status quo. This is a common story pattern in crime fiction, too—there’s some disruption to the order, but eventually order is restored. Science fiction offers a different kind of story, a story where the world starts out as recognizable and familiar but is disrupted or changed by some new discovery or technology. At the end of the story, the world is changed permanently. The original condition is never restored. And so in this sense, this story pattern is progressive because its underlying message is not that you should maintain the status quo, but that change is inevitable. The consequences of this new discovery or technology—whether they’re positive or negative—are here to stay and we’ll have to deal with them.
  • There’s also a subset of this progressive story pattern that I’m particularly interested in, and that’s the “conceptual breakthrough” story, where the characters discover something about the nature of the universe which radically expands their understanding of the world.  This is a classic science fiction storyline.
  • one of the cool things about science fiction is that it lets you dramatize the process of scientific discovery, that moment of suddenly understanding something about the universe. That is what scientists find appealing about science, and I enjoy seeing the same thing in science fiction.
  • when you mention myth or mythic structure, yes, I don’t think myths can do that, because in general, myths reflect a pre-industrial view of the world. I don’t know if there is room in mythology for a strong conception of the future, other than an end-of-the-world or Armageddon scenario …

New Thinking and Old Books Revisited - NYTimes.com - 0 views

  • Mark Thoma’s classic crack — “I’ve learned that new economic thinking means reading old books” — has a serious point to it. We’ve had a couple of centuries of economic thought at this point, and quite a few smart people doing the thinking. It’s possible to come up with truly new concepts and approaches, but it takes a lot more than good intentions and casual observation to get there.
  • There is definitely a faction within economics that considers it taboo to introduce anything into its analysis that isn’t grounded in rational behavior and market equilibrium
  • what I do, and what everyone I’ve just named plus many others does, is a more modest, more eclectic form of analysis. You use maximization and equilibrium where it seems reasonably consistent with reality, because of its clarifying power, but you introduce ad hoc deviations where experience seems to demand them — downward rigidity of wages, balance-sheet constraints, bubbles (which are hard to predict, but you can say a lot about their consequences).
  • You may say that what we need is reconstruction from the ground up — an economics with no vestige of equilibrium analysis. Well, show me some results. As it happens, the hybrid, eclectic approach I’ve just described has done pretty well in this crisis, so you had better show me some really superior results before it gets thrown out the window.
  • if you think you’ve found a fundamental logical flaw in one of our workhorse economic models, the odds are very strong that you’ve just made a mistake.
  • it’s quite clear that the teaching of macroeconomics has gone seriously astray. As Saraceno says, the simple models that have proved so useful since 2008 are by and large taught only at the undergrad level — they’re treated as too simple, too ad hoc, whatever, to make it into the grad courses even at places that aren’t very ideological.
  • to temper your modeling with a sense of realism you need to know something about reality — and not just the statistical properties of U.S. time series since 1947. Economic history — global economic history — should be a core part of the curriculum. Nobody should be making pronouncements on macro without knowing a fair bit about the collapse of the gold standard in the 1930s, what actually happened in the stagflation of the 1970s, the Asian financial crisis of the 90s, and, looking forward, the euro crisis.

Grayson Perry's Reith Lectures: Who decides what makes art good? - FT.com - 0 views

  • I think this is one of the most burning issues around art – how do we tell if something is good? And who tells us that it’s good?
  • many of the methods of judging are very problematic and many of the criteria used to assess art are conflicting. We have financial value, popularity, art historical significance, or aesthetic sophistication. All these things could be at odds with each other.
  • A visitor to an exhibition like the Hockney one, if they were judging the quality of the art, might use a word like “beauty”. Now, if you use that kind of word in the art world, be very careful. There will be sucking of teeth and mournful shaking of heads because their hero, the artist Marcel Duchamp, of “urinal” fame, he said, “Aesthetic delectation is the danger to be avoided.” In the art world sometimes it can feel as if to judge something on its beauty, on its aesthetic merits, is as if you’re buying into something politically incorrect, into sexism, into racism, colonialism, class privilege. It almost feels it’s loaded, because where does our idea of beauty come from?
  • beauty is very much about familiarity and it’s reinforcing an idea we have already. It’s like when we go on holiday, all we really want to do is take the photograph that we’ve seen in the brochure. Because our idea of beauty is constructed, by family, friends, education, nationality, race, religion, politics, all these things
  • I have found the 21st-century version of the Venetian secret and it is a mathematical formula. What you do, you get a half-decent, non-offensive kind of idea, then you times it by the number of studio assistants, and then you divide it with an ambitious art dealer, and that equals number of oligarchs and hedge fund managers in the world.
  • the nearest we have to an empirical measure of art that actually does exist is the market. By that reckoning, Cézanne’s “Card Players” is the most beautiful lovely painting in the world. I find it a little bit clunky-kitsch but that’s me. It’s worth $260m.
  • The opposite arguments are that it’s art for art’s sake and that’s a very idealistic position to take. Clement Greenberg, a famous art critic in the 1950s, said that art will always be tied to money by an umbilical cord of gold, either state money or market money. I’m pragmatic about it: one of my favourite quotes is you’ll never have a good art career unless your work fits into the elevator of a New York apartment block.
  • there’s one thing about that red painting that ends up in Sotheby’s. It’s not just any old red painting. It is a painting that has been validated. This is an important word in the art world and the big question is: who validates? There is quite a cast of characters in this validation chorus that will kind of decide what is good art. They are a kind of panel, if you like, that decides on what is good quality, what are we going to end up looking at?
  • They include artists, teachers, dealers, collectors, critics, curators, the media, even the public maybe. And they form this lovely consensus around what is good art.
  • there were four stages to the rise of an artist. Peers, serious critics and collectors, dealers, then the public.
  • Another member of that cast of validating characters is the collectors. In the 1990s, if Charles Saatchi just put his foot over the threshold of your exhibition, that was it. The media was agog and he would come in and Hoover it up. You do want the heavyweight collector to buy your work because that gives it kudos. You don’t want a tacky one who is just buying it to glitz up their hallway.
  • The next part of this chorus of validation are the dealers. A good dealer brand has a very powerful effect on the reputation of the artist; they form a part of placing the work. This is a slightly mysterious process that many people don’t quite understand but a dealer will choose where your work goes so it gains the brownie points, so the buzz around it goes up.
  • now, of course, galleries like the Tate Modern want a big name because visitor numbers, in a way, are another empirical measure of quality. So perhaps at the top of the tree of the validation cast are the curators, and in the past century they have probably become the most powerful giver-outers of brownie points in the art world.
  • Each of the encounters with these members of the cast of validation bestows upon the work, and on the artist, a patina, and what makes that patina is all these hundreds of little conversations and reviews and the good prices over time. These are the filters that pass a work of art through into the canon.
  • So what does this lovely consensus, that all these people are bestowing on this artwork, that anoints it with the quality that we all want, boil down to? I think in many ways what it boils down to is seriousness. That’s the most valued currency in the art world.
  • The whole idea of quality now seems to be contested, as if you’re buying into the language of the elite by saying, “Oh, that’s very good.” How you might judge this work is really problematic because to say it’s not beautiful is to put the wrong kind of criteria on it. You might say, “Oh, it’s dull!” [And people will say] “Oh, you’re just not understanding it with the right terms.” So I think, “Well, how do we judge these things?” Because a lot of them are quite politicised. There’s quite a right-on element to them, so do we judge them on how ethical they are, or how politically right-on they are?
  • What I am attempting to explain is how the art we see in museums and in galleries around the world, and in biennales – how it ends up there, how it gets chosen. In the end, if enough of the right people think it’s good, that’s all there is to it. But, as Alan Bennett said when he was a trustee of the National Gallery, they should put a big sign up outside saying: “You don’t have to like it all.”
  • Or then again I might say, “Well, what do I judge them against?” Do I judge them against government policy? Do I judge them against reality TV? Because that does participation very well. So, in the end, what do we do? What happens to this sort of art when it doesn’t have validation? What is it left with? It’s left with popularity.
  • Then, of course, the next group of people we might think about in deciding what is good art is the public. Since the mid-1990s, art has got a lot more media attention. But popularity has always been a quite dodgy quality [to have]. The highbrow critics will say, “Oh, he’s a bit of a celebrity,” and they turn their noses up about people who are well known to the public

Do Our Bones Influence Our Minds? : The New Yorker - 0 views

  • But their skeletons appeared essentially normal, he says, a result that left him “deeply depressed.”
  • It turns out that osteocalcin is a messenger, sent by bone to regulate crucial processes all over the body.
  • The finding represents new ground in how researchers view the skeleton: not only do bones provide structural support and serve as a repository for calcium and phosphate, they issue commands to far-flung cells
  • “This is a biggie,” said Eric Kandel, the neuroscientist and Nobel Laureate. “Who thinks of the bone as being an endocrine organ? You think of the adrenal gland, you think of the pituitary, you don’t think of bone.”
  • The most recent finding concerns the skeleton and the brain.
  • Karsenty showed that bone plays a direct role in memory and mood. Mice whose skeletons did not produce osteocalcin as a result of genetic manipulation were anxious, depressed, and almost completely unable to master a test of spatial memory. When Karsenty infused them with the missing hormone, however, their moods improved and their performance on the memory test became nearly normal. He also found that, in pregnant mice, osteocalcin from the mother’s bones crossed the placenta and helped shape the development of the fetus’s brain. In other words, bones talk to neurons even before birth.
  • As we age, our bone mass decreases. Memory loss, anxiety, and depression also become more common. These may be separate, unfortunate facts about getting old, but they could also be related.
  • Even more fantastically: Would it ever be possible to protect memory or treat age-related cognitive decline with a skeletal hormone? These are the kinds of questions that can spur either false hopes or imaginative leaps.
  • “I don’t know of any hormone that functions in mice but not to some extent in humans,” Thomas Clemens, of Johns Hopkins, told me in 2011
  • One tantalizing hint comes from men who are unable to respond to the hormone as a result of a genetic mutation
  • Karsenty also believes that we know enough now to recognize that the body is far more networked and interconnected than most people think. “No organ is an island,” he likes to say.

How the Internet Gets Inside Us : The New Yorker - 0 views

  • It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness.
  • In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix.
  • The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965.
  • If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.
  • Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began.
  • Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
  • What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
  • A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them.
  • And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
  • Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.
  • the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.

Our Machine Masters - NYTimes.com - 0 views

  • the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.
  • This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection and better algorithms. The upshot is clear, “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I.”
  • Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.
  • in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them.
  • As a result, our A.I. future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
  • Advances in artificial intelligence will accelerate this centralizing trend. That’s because A.I. companies will be able to reap the rewards of network effects. The bigger their network and the more data they collect, the more effective and attractive they become.
  • The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head.
  • engineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.
  • The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do
  • On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.
  • For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math.
  • In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions.
  • I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.
  • In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much.
  • In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it
  • In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.
  • In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.

The Mental Virtues - NYTimes.com - 0 views

  • Even if you are alone in your office, you are thinking. Thinking well under a barrage of information may be a different sort of moral challenge than fighting well under a hail of bullets, but it’s a character challenge nonetheless.
  • some of the cerebral virtues. We can all grade ourselves on how good we are at each of them.
  • love of learning. Some people are just more ardently curious than others, either by cultivation or by nature.
  • courage. The obvious form of intellectual courage is the willingness to hold unpopular views. But the subtler form is knowing how much risk to take in jumping to conclusions.
  • Intellectual courage is self-regulation, Roberts and Wood argue, knowing when to be daring and when to be cautious. The philosopher Thomas Kuhn pointed out that scientists often simply ignore facts that don’t fit with their existing paradigms, but an intellectually courageous person is willing to look at things that are surprisingly hard to look at.
  • The median point between flaccidity and rigidity is the virtue of firmness. The firm believer can build a steady worldview on solid timbers but still delight in new information. She can gracefully adjust the strength of her conviction to the strength of the evidence. Firmness is a quality of mental agility.
  • humility, which is not letting your own desire for status get in the way of accuracy. The humble person fights against vanity and self-importance.
  • The humble researcher doesn’t become arrogant toward his subject, assuming he has mastered it. Such a person is open to learning from anyone at any stage in life.
  • autonomy
  • Autonomy is the median of knowing when to bow to authority and when not to, when to follow a role model and when not to, when to adhere to tradition and when not to.
  • generosity. This virtue starts with the willingness to share knowledge and give others credit. But it also means hearing others as they would like to be heard, looking for what each person has to teach and not looking to triumphantly pounce upon their errors.
  • thinking well means pushing against the grain of our nature — against vanity, against laziness, against the desire for certainty, against the desire to avoid painful truths. Good thinking isn’t just adopting the right technique. It’s a moral enterprise and requires good character, the ability to go against our lesser impulses for the sake of our higher ones.
  • wisdom isn’t a body of information. It’s the moral quality of knowing how to handle your own limitations.
  • Warren Buffett made a similar point in his own sphere, “Investing is not a game where the guy with the 160 I.Q. beats the guy with the 130 I.Q. Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble.”
  • Good piece. I only wish David had written more about all the forces that work _against_ the virtues he describes. The innumerable examples of corporate suppression/spin of "inconvenient" truths (i.e, GM, Toyota, et al); the virtual acceptance that lying is a legitimate tactic in political campaigns; our preoccupation with celebrity, appearances, and "looking good" in every imaginable transaction; make the quiet virtues that DB describes even more heroic than he suggests.

Faith vs. Facts - NYTimes.com - 0 views

  • a broad group of scholars is beginning to demonstrate that religious belief and factual belief are indeed different kinds of mental creatures.
  • People process evidence differently when they think with a factual mind-set rather than with a religious mind-set
  • Even what they count as evidence is different
  • And they are motivated differently, based on what they conclude.
  • On what grounds do scholars make such claims?
  • the very language people use changes when they talk about religious beings, and the changes mean that they think about their realness differently.
  • to say, “I believe that Jesus Christ is alive” signals that you know that other people might not think so. It also asserts reverence and piety
  • We seem to regard religious beliefs and factual beliefs with what the philosopher Neil Van Leeuwen calls different “cognitive attitudes.”
  • when people consider the truth of a religious belief, what the belief does for their lives matters more than, well, the facts.
  • We evaluate religious beliefs more with our sense of destiny, purpose and the way we think the world should be
  • religious and factual beliefs play different roles in interpreting the same events. Religious beliefs explain why, rather than how
  • people’s reliance on supernatural explanations increases as they age.
  • It’s the young kids who seem skeptical when researchers ask them about gods and ancestors, and the adults who seem clear and firm. It seems that supernatural ideas do things for adults they do not yet do for children.
  • people don’t use rational, instrumental reasoning when they deal with religious beliefs
  • sacred values are immune to the normal cost-benefit trade-offs that govern other dimensions of our lives.
  • The danger point seems to be when people feel themselves to be completely fused with a group defined by its sacred value.
  • One of the interesting things about sacred values, however, is that they are both general (“I am a true Christian”) and particular (“I believe that abortion is murder”)
  • It is possible that this is the key to effective negotiation, because the ambiguity allows the sacred value to be reframed without losing its essential truth
  • these new ideas about religious belief should shape the way people negotiate about ownership of the land, just as they should shape the way we think about climate change deniers and vaccine avoiders. People aren’t dumb in not recognizing the facts. They are using a reasoning process that responds to moral arguments more than scientific ones, and we should understand that when we engage.

Technology Imperialism, the Californian Ideology, and the Future of Higher Education - 2 views

  • What I hope to make explicit today is how much California – the place, the concept, “the dream machine” – shapes (wants to shape) the future of technology and the future of education.
  • In an announcement on Facebook – of course – Zuckerberg argued that “connectivity is a human right.”
  • As Zuckerberg frames it at least, the “human right” in this case is participation in the global economy
  • This is a revealing definition of “human rights,” I’d argue, particularly as it’s one that never addresses things like liberty, equality, or justice. It never addresses freedom of expression or freedom of assembly or freedom of association.
  • in certain countries, a number of people say they do not use the Internet yet they talk about how much time they spend on Facebook. According to one survey, 11% of Indonesians who said they used Facebook also said they did not use the Internet. A survey in Nigeria had similar results:
  • Evgeny Morozov has described this belief as “Internet-centrism,” an ideology he argues permeates the tech industry, its PR wing the tech blogosphere, and increasingly government policy
  • “Internet-centrism” describes the tendency to see “the Internet” – Morozov uses quotations around the phrase – as a new yet unchanging, autonomous, benevolent, and inevitable socio-technological development. “The Internet” is a master framework for how all institutions will supposedly operate moving forward
  • “The opportunity to connect” as a human right assumes that “connectivity” will hasten the advent of these other rights, I suppose – that the Internet will topple dictatorships, for example, that it will extend participation in civic life to everyone and, for our purposes here at this conference, that it will “democratize education.”
  • Empire is not simply an endeavor of the nation-state – we have empire through technology (that’s not new) and now, the technology industry as empire.
  • Facebook is really just synecdochal here, I should add – just one example of the forces I think are at play, politically, economically, technologically, culturally.
  • it matters at the level of ideology. Infrastructure is ideological, of course. The new infrastructure – “the Internet” if you will – has a particular political, economic, and cultural bent to it. It is not neutral.
  • This infrastructure matters. In this case, this is a French satellite company (Eutelsat). This is an American social network (Facebook). Mark Zuckerberg’s altruistic rhetoric aside, this is their plan – an economic plan – to monetize the world’s poor.
  • The content and the form of “connectivity” perpetuate imperialism, and not only in Africa but in all of our lives. Imperialism at the level of infrastructure – not just cultural imperialism but technological imperialism
  • “The Silicon Valley Narrative,” as I call it, is the story that the technology industry tells about the world – not only the world-as-is but the world-as-Silicon-Valley-wants-it-to-be.
  • To better analyze and assess both technology and education technology requires our understanding of these as ideological, argues Neil Selwyn – “‘a site of social struggle’ through which hegemonic positions are developed, legitimated, reproduced and challenged.”
  • This narrative has several commonly used tropes
  • It often features a hero: the technology entrepreneur. Smart. Independent. Bold. Risk-taking. White. Male
  • “The Silicon Valley narrative” invokes themes like “innovation” and “disruption.” It privileges the new; everything else that can be deemed “old” is viewed as obsolete.
  • It contends that its workings are meritocratic: anyone who hustles can make it.
  • “The Silicon Valley narrative” fosters a distrust of institutions – the government, the university. It is neoliberal. It hates paying taxes.
  • “The Silicon Valley narrative” draws from the work of Ayn Rand; it privileges the individual at all costs; it calls this “personalization.”
  • “The Silicon Valley narrative” does not neatly co-exist with public education. We forget this at our peril. This makes education technology, specifically, an incredibly fraught area.
  • Here’s the story I think we like to hear about ed-tech, about distance education, about “connectivity” and learning: Education technology is supportive, not exploitative. Education technology opens, not forecloses, opportunities. Education technology is driven by a rethinking of teaching and learning, not expanding markets or empire. Education technology meets individual and institutional and community goals.
  • That’s not really what the “Silicon Valley narrative” says about education
  • It is interested in data extraction and monetization and standardization and scale. It is interested in markets and return on investment. “Education is broken,” and technology will fix it
  • If “Silicon Valley” isn’t quite accurate, then I must admit that the word “narrative” is probably inadequate too
  • The better term here is “ideology.”
  • Facebook is “the Internet” for a fairly sizable number of people. They know nothing else – conceptually, experientially. And, let’s be honest, Facebook wants to be “the Internet” for everyone.
  • We tend to not see technology as ideological – its connections to libertarianism, neoliberalism, global capitalism, empire.
  • The California ideology ignores race and labor and the water supply; it is sustained by air and fantasy. It is built upon white supremacy and imperialism.
  • As is the technology sector, which has its own history, of course, in warfare and cryptography.
  • So far this year, some $3.76 billion of venture capital has been invested in education technology – a record-setting figure. That money will change the landscape – that’s its intention. That money carries with it a story about the future; it carries with it an ideology.
  • When a venture capitalist says that “software is eating the world,” we can push back on the inevitability implied in that. We can resist – not in the name of clinging to “the old” as those in educational institutions are so often accused of doing – but we can resist in the name of freedom and justice and a future that isn’t dictated by the wealthiest white men in Hollywood or Silicon Valley.
  • We in education would be naive, I think, to think that the designs that venture capitalists and technology entrepreneurs have for us would be any less radical than creating a new state, like Draper’s proposed state of Silicon Valley, that would be enormously wealthy and politically powerful.
  • When I hear talk of “unbundling” in education – one of the latest gerunds you’ll hear venture capitalists and ed-tech entrepreneurs invoke, meaning the disassembling of institutions into products and services – I can’t help but think of the “unbundling” that Draper wished to do to my state: carving up land and resources, shifting tax revenue and tax burdens, creating new markets, privatizing public institutions, redistributing power and doing so explicitly not in the service of equity or justice.
  • I want to show you this map, a proposal – a failed proposal, thankfully – by venture capitalist Tim Draper to split the state of California into six separate states: Jefferson, North California, Silicon Valley, Central California, West California, and South California. The proposal, which Draper tried to collect enough signatures to get on the ballot in California, would have created the richest state in the US – Silicon Valley would be first in per-capita income. It would also have created the nation’s poorest state, Central California, which would rank even below Mississippi.
  • that’s not all that Silicon Valley really does.

Is Science Kind of a Scam? - The New Yorker - 1 views

  • No well-tested scientific concept is more astonishing than the one that gives its name to a new book by the Scientific American contributing editor George Musser, “Spooky Action at a Distance.”
  • The ostensible subject is the mechanics of quantum entanglement; the actual subject is the entanglement of its observers.
  • his question isn’t so much how this weird thing can be true as why, given that this weird thing had been known about for so long, so many scientists were so reluctant to confront it. What keeps a scientific truth from spreading?
  • it is as if two magic coins, flipped at different corners of the cosmos, always came up heads or tails together. (The spooky action takes place only in the context of simultaneous measurement. The particles share states, but they don’t send signals.)
  • fashion, temperament, zeitgeist, and sheer tenacity affected the debate, along with evidence and argument.
  • The certainty that spooky action at a distance takes place, Musser says, challenges the very notion of “locality,” our intuitive sense that some stuff happens only here, and some stuff over there. What’s happening isn’t really spooky action at a distance; it’s spooky distance, revealed through an action.
  • Why, then, did Einstein’s question get excluded for so long from reputable theoretical physics? The reasons, unfolding through generations of physicists, have several notable social aspects,
  • What started out as a reductio ad absurdum became proof that the cosmos is in certain ways absurd. What began as a bug became a feature and is now a fact.
  • “If poetry is emotion recollected in tranquility, then science is tranquility recollected in emotion.” The seemingly neutral order of the natural world becomes the sounding board for every passionate feeling the physicist possesses.
  • Musser explains that the big issue was settled mainly by being pushed aside. Generational imperatives trumped evidentiary ones. The things that made Einstein the lovable genius of popular imagination were also the things that made him an easy object of condescension. The hot younger theorists patronized him,
  • There was never a decisive debate, never a hallowed crucial experiment, never even a winning argument to settle the case, with one physicist admitting, “Most physicists (including me) accept that Bohr won the debate, although like most physicists I am hard pressed to put into words just how it was done.”
  • Arguing about non-locality went out of fashion, in this account, almost the way “Rock Around the Clock” displaced Sinatra from the top of the charts.
  • The same pattern of avoidance and talking-past and taking on the temper of the times turns up in the contemporary science that has returned to the possibility of non-locality.
  • the revival of “non-locality” as a topic in physics may be due to our finding the metaphor of non-locality ever more palatable: “Modern communications technology may not technically be non-local but it sure feels that it is.”
  • Living among distant connections, where what happens in Bangalore happens in Boston, we are more receptive to the idea of such a strange order in the universe.
  • The “indeterminacy” of the atom was, for younger European physicists, “a lesson of modernity, an antidote to a misplaced Enlightenment trust in reason, which German intellectuals in the 1920’s widely held responsible for their country’s defeat in the First World War.” The tonal and temperamental difference between the scientists was as great as the evidence they called on.
  • Science isn’t a slot machine, where you drop in facts and get out truths. But it is a special kind of social activity, one where lots of different human traits—obstinacy, curiosity, resentment of authority, sheer cussedness, and a grudging readiness to submit pet notions to popular scrutiny—end by producing reliable knowledge
  • What was magic became mathematical and then mundane. “Magical” explanations, like spooky action, are constantly being revived and rebuffed, until, at last, they are reinterpreted and accepted. Instead of a neat line between science and magic, then, we see a jumpy, shifting boundary that keeps getting redrawn
  • Real-world demarcations between science and magic, Musser’s story suggests, are like Bugs’s: made on the move and as much a trap as a teaching aid.
  • In the past several decades, certainly, the old lines between the history of astrology and astronomy, and between alchemy and chemistry, have been blurred; historians of the scientific revolution no longer insist on a clean break between science and earlier forms of magic.
  • Where once logical criteria for distinguishing science from non-science (or pseudo-science) were sought and taken seriously—Karl Popper’s criterion of “falsifiability” was perhaps the most famous, insisting that a sound theory could, in principle, be proved wrong by one test or another—many historians and philosophers of science have come to think that this is a naïve view of how the scientific enterprise actually works.
  • They see a muddle of coercion, old magical ideas, occasional experiment, hushed-up failures—all coming together in a social practice that gets results but rarely follows a definable logic.
  • Yet the old notion of a scientific revolution that was really a revolution is regaining some credibility.
  • David Wootton, in his new, encyclopedic history, “The Invention of Science” (Harper), recognizes the blurred lines between magic and science but insists that the revolution lay in the public nature of the new approach.
  • What killed alchemy was the insistence that experiments must be openly reported in publications which presented a clear account of what had happened, and they must then be replicated, preferably before independent witnesses.
  • Wootton, while making little of Popper’s criterion of falsifiability, makes it up to him by borrowing a criterion from his political philosophy. Scientific societies are open societies. One day the lunar tides are occult, the next day they are science, and what changes is the way in which we choose to talk about them.
  • Wootton also insists, against the grain of contemporary academia, that single observed facts, what he calls “killer facts,” really did polish off antique authorities
  • once we agree that the facts are facts, they can do amazing work. Traditional Ptolemaic astronomy, in place for more than a millennium, was destroyed by what Galileo discovered about the phases of Venus. That killer fact “serves as a single, solid, and strong argument to establish its revolution around the Sun, such that no room whatsoever remains for doubt,” Galileo wrote, and Wootton adds, “No one was so foolish as to dispute these claims.”
  • Several things flow from Wootton’s view. One is that “group think” in the sciences is often true think. Science has always been made in a cloud of social networks.
  • There has been much talk in the pop-sci world of “memes”—ideas that somehow manage to replicate themselves in our heads. But perhaps the real memes are not ideas or tunes or artifacts but ways of making them—habits of mind rather than products of mind
  • Is science, then, a club like any other, with fetishes and fashions, with schemers, dreamers, and blackballed applicants? Is there a real demarcation to be made between science and every other kind of social activity?
  • The claim that basic research is valuable because it leads to applied technology may be true but perhaps is not at the heart of the social use of the enterprise. The way scientists do think makes us aware of how we can think
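
A note on the annotation about the two “magic coins” above: that analogy can be made concrete with a toy simulation. The sketch below is our own illustration, not anything from Musser’s book, and it uses a shared-hidden-state shortcut that reproduces only the same-setting correlation; Bell-test experiments show that no such local model can match entanglement once the measurement settings vary.

```python
import random

def flip_entangled_pair():
    """Toy version of Musser's 'magic coins': a value fixed when the
    pair is created makes both outcomes agree, with no signal sent
    between the two 'coins' at measurement time."""
    shared = random.choice(["heads", "tails"])  # set at pair creation
    return shared, shared  # each coin reveals the same pre-shared value

trials = [flip_entangled_pair() for _ in range(10_000)]
agreement = sum(a == b for a, b in trials) / len(trials)
print(f"same-outcome rate: {agreement:.3f}")  # always 1.000

# Caveat: this local-hidden-variable toy mimics entanglement only when
# both sides measure the same way; quantum correlations at mismatched
# settings exceed what any model of this kind allows (Bell's theorem).
```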

The Joy of Psyching Myself Out - The New York Times - 0 views

  • Is it possible to think scientifically and creatively at once? Can you be both a psychologist and a writer?
  • “A writer must be as objective as a chemist,” Anton Chekhov wrote in 1887. “He must abandon the subjective line; he must know that dung heaps play a very reasonable part in a landscape.” Chekhov’s chemist is a naturalist — someone who sees reality for what it is, rather than what it should be. In that sense, the starting point of the psychologist and the writer is the same: a curiosity that leads you to observe life in all its dimensions.
  • Without verification, we can’t always trust what we see — or rather, what we think we see. Whether we’re psychologists or writers (or anything else), our eyes are never the impartial eyes of Chekhov’s chemist. Our expectations, our wants and shoulds, get in the way. Take, once again, lying. Why do we think we know how liars behave? Liars should divert their eyes. They should feel ashamed and guilty and show the signs of discomfort that such feelings engender. And because they should, we think they do.
  • ...6 more annotations...
  • The desire for the world to be what it ought to be and not what it is permeates experimental psychology as much as writing, though. There’s experimental bias and the problem known in the field as “demand characteristics” — when researchers end up finding what they want to find by cuing participants to act a certain way. It’s also visible when psychologists choose to study one thing rather than another, dismiss evidence that doesn’t mesh with their worldview while embracing that which does. The subjectivity we tend to associate with the writerly way of looking may simply be more visible in that realm rather than exclusive to it.
  • “There is no other source of knowledge of the universe but the intellectual manipulation of carefully verified observations,” he said.
  • Intuition and inspiration, he went on, “can safely be counted as illusions, as fulfillments of wishes.” They are not to be relied on as evidence of any sort. “Science takes account of the fact that the mind of man creates such demands and is ready to trace their source, but it has not the slightest ground for thinking them justified.”
  • That is what both the psychologist and the writer should strive for: a self-knowledge that allows you to look in order to discover, without agenda, without preconception, without knowing or caring if what you’re seeing is wrong or right in your scheme of the world. It’s harder than it sounds. For one thing, you have to possess the self-knowledge that will allow you to admit when you’re wrong.
  • most new inquiries never happened — in a sense, it meant that objectivity was more an ideal than a reality. Each study was selected for a reason other than intrinsic interest.
  • Isolation precludes objectivity. It’s in the merging not simply of ways of seeing but also of modes of thought that a truly whole perception of reality may eventually emerge. Or at least that way we can realize its ultimate impossibility — and that’s not nothing, either.

The Tech Industry's Psychological War on Kids - Member Feature Stories - Medium - 0 views

  • she cried, “They took my f***ing phone!” Attempting to engage Kelly in conversation, I asked her what she liked about her phone and social media. “They make me happy,” she replied.
  • Even though they were loving and involved parents, Kelly’s mom couldn’t help feeling that she and her husband had failed their daughter and must have done something terribly wrong that led to her problems.
  • My practice as a child and adolescent psychologist is filled with families like Kelly’s. These parents say their kids’ extreme overuse of phones, video games, and social media is the most difficult parenting issue they face — and, in many cases, is tearing the family apart.
  • ...88 more annotations...
  • What none of these parents understand is that their children’s and teens’ destructive obsession with technology is the predictable consequence of a virtually unrecognized merger between the tech industry and psychology.
  • Dr. B.J. Fogg is a psychologist and the father of persuasive technology, a discipline in which digital machines and apps — including smartphones, social media, and video games — are configured to alter human thoughts and behaviors. As the lab’s website boldly proclaims: “Machines designed to change humans.”
  • These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit.
  • psychology — a discipline that we associate with healing — is now being used as a weapon against children.
  • This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
  • Likewise, social media companies use persuasive design to prey on the age-appropriate desire of preteen and teen kids, especially girls, to be socially successful. This drive is built into our DNA, since real-world relational skills have fostered human evolution.
  • Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”
  • Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfill their basic human drives — to be social or obtain goals — better than real-world alternatives.
  • Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.
  • While persuasion techniques work well on adults, they are particularly effective at influencing the still-maturing child and teen brain.
  • “Video games, better than anything else in our culture, deliver rewards to people, especially teenage boys,” says Fogg. “Teenage boys are wired to seek competency. To master our world and get better at stuff. Video games, in dishing out rewards, can convey to people that their competency is growing, you can get better at something second by second.”
  • it’s persuasive design that’s helped convince this generation of boys they are gaining “competency” by spending countless hours on game sites, when the sad reality is they are locked away in their rooms gaming, ignoring school, and not developing the real-world competencies that colleges and employers demand.
  • Persuasive technologies work because of their apparent triggering of the release of dopamine, a powerful neurotransmitter involved in reward, attention, and addiction.
  • As she says, “If you don’t get 100 ‘likes,’ you make other people share it so you get 100…. Or else you just get upset. Everyone wants to get the most ‘likes.’ It’s like a popularity contest.”
  • There are costs to Casey’s phone obsession: her phone, be it Facebook, Instagram or iMessage, is constantly pulling her away from her homework, sleep, or conversations with her family.
  • Casey says she wishes she could put her phone down. But she can’t. “I’ll wake up in the morning and go on Facebook just… because,” she says. “It’s not like I want to or I don’t. I just go on it. I’m, like, forced to. I don’t know why. I need to. Facebook takes up my whole life.”
  • B.J. Fogg may not be a household name, but Fortune Magazine calls him a “New Guru You Should Know,” and his research is driving a worldwide legion of user experience (UX) designers who utilize and expand upon his models of persuasive design.
  • “No one has perhaps been as influential on the current generation of user experience (UX) designers as Stanford researcher B.J. Fogg.”
  • the core of UX research is about using psychology to take advantage of our human vulnerabilities.
  • As Fogg is quoted in Kosner’s Forbes article, “Facebook, Twitter, Google, you name it, these companies have been using computers to influence our behavior.” However, the driving force behind behavior change isn’t computers. “The missing link isn’t the technology, it’s psychology,” says Fogg.
  • UX researchers not only follow Fogg’s design model, but also his apparent tendency to overlook the broader implications of persuasive design. They focus on the task at hand, building digital machines and apps that better demand users’ attention, compel users to return again and again, and grow businesses’ bottom line.
  • the “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers (a sketch of this model appears after these annotations).
  • “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.”
  • Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.” Hence, social networks are designed for ease of use
  • Finally, Fogg says that potential users need to be triggered to use a site. This is accomplished by a myriad of digital tricks, including the sending of incessant notifications
  • moral questions about the impact of turning persuasive techniques on children and teens are not being asked. For example, should the fear of social rejection be used to compel kids to compulsively use social media? Is it okay to lure kids away from school tasks that demand a strong mental effort so they can spend their lives on social networks or playing video games that don’t make them think much at all?
  • Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.”
  • the startup Dopamine Labs boasts about its use of persuasive techniques to increase profits: “Connect your app to our Persuasive AI [Artificial Intelligence] and lift your engagement and revenue up to 30% by giving your users our perfect bursts of dopamine,” and “A burst of Dopamine doesn’t just feel good: it’s proven to re-wire user behavior and habits.”
  • Ramsay Brown, the founder of Dopamine Labs, says in a KQED Science article, “We have now developed a rigorous technology of the human mind, and that is both exciting and terrifying. We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behavior in ways that, unbeknownst to them, feel second-nature but are really by design.”
  • Programmers call this “brain hacking,” as it compels users to spend more time on sites even though they mistakenly believe it’s strictly due to their own conscious choices.
  • Banks of computers employ AI to “learn” which of a countless number of persuasive design elements will keep users hooked
  • A persuasion profile of a particular user’s unique vulnerabilities is developed in real time and exploited to keep users on the site and make them return again and again for longer periods of time. This drives up profits for consumer internet companies whose revenue is based on how much their products are used. (A sketch of such a learning loop appears after these annotations.)
  • “The leaders of Internet companies face an interesting, if also morally questionable, imperative: either they hijack neuroscience to gain market share and make large profits, or they let competitors do that and run away with the market.”
  • Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival.
  • Children’s well-being is not part of the decision calculus.
  • one breakthrough occurred in 2017 when Facebook documents were leaked to The Australian. The internal report crafted by Facebook executives showed the social network boasting to advertisers that by monitoring posts, interactions, and photos in real time, the network is able to track when teens feel “insecure,” “worthless,” “stressed,” “useless” and a “failure.”
  • The report also bragged about Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.”
  • These design techniques provide tech corporations a window into kids’ hearts and minds to measure their particular vulnerabilities, which can then be used to control their behavior as consumers. This isn’t some strange future… this is now.
  • The official tech industry line is that persuasive technologies are used to make products more engaging and enjoyable. But the revelations of industry insiders can reveal darker motives.
  • Revealing the hard science behind persuasive technology, Hopson says, “This is not to say that players are the same as rats, but that there are general rules of learning which apply equally to both.”
  • After penning the paper, Hopson was hired by Microsoft, where he helped lead the development of Xbox Live, Microsoft’s online gaming system
  • “If game designers are going to pull a person away from every other voluntary social activity or hobby or pastime, they’re going to have to engage that person at a very deep level in every possible way they can.”
  • This is the dominant effect of persuasive design today: building video games and social media products so compelling that they pull users away from the real world to spend their lives in for-profit domains.
  • Persuasive technologies are reshaping childhood, luring kids away from family and schoolwork to spend more and more of their lives sitting before screens and phones.
  • “Since we’ve figured to some extent how these pieces of the brain that handle addiction are working, people have figured out how to juice them further and how to bake that information into apps.”
  • Today, persuasive design is likely distracting adults from driving safely, productive work, and engaging with their own children — all matters which need urgent attention
  • Still, because the child and adolescent brain is more easily controlled than the adult mind, the use of persuasive design is having a much more hurtful impact on kids.
  • But to engage in a pursuit at the expense of important real-world activities is a core element of addiction.
  • younger U.S. children now spend 5 ½ hours each day with entertainment technologies, including video games, social media, and online videos.
  • Even more, the average teen now spends an incredible 8 hours each day playing with screens and phones
  • U.S. kids only spend 16 minutes each day using the computer at home for school.
  • Quietly, using screens and phones for entertainment has become the dominant activity of childhood.
  • Younger kids spend more time engaging with entertainment screens than they do in school
  • teens spend even more time playing with screens and phones than they do sleeping
  • kids are so taken with their phones and other devices that they have turned their backs to the world around them.
  • many children are missing out on real-life engagement with family and school — the two cornerstones of childhood that lead them to grow up happy and successful
  • persuasive technologies are pulling kids into often toxic digital environments
  • A too frequent experience for many is being cyberbullied, which increases their risk of skipping school and considering suicide.
  • And there is growing recognition of the negative impact of FOMO, or the fear of missing out, as kids spend their social media lives watching a parade of peers who look to be having a great time without them, feeding their feelings of loneliness and being less than.
  • The combined effects of the displacement of vital childhood activities and exposure to unhealthy online environments is wrecking a generation.
  • as the typical age when kids get their first smartphone has fallen to 10, it’s no surprise to see serious psychiatric problems — once the domain of teens — now enveloping young kids
  • Self-inflicted injuries, such as cutting, that are serious enough to require treatment in an emergency room, have increased dramatically in 10- to 14-year-old girls, up 19% per year since 2009.
  • While girls are pulled onto smartphones and social media, boys are more likely to be seduced into the world of video gaming, often at the expense of a focus on school
  • it’s no surprise to see this generation of boys struggling to make it to college: a full 57% of college admissions are granted to young women compared with only 43% to young men.
  • Economists working with the National Bureau of Economic Research recently demonstrated how many young U.S. men are choosing to play video games rather than join the workforce.
  • The destructive forces of psychology deployed by the tech industry are making a greater impact on kids than the positive uses of psychology by mental health providers and child advocates. Put plainly, the science of psychology is hurting kids more than helping them.
  • Hope for this wired generation has seemed dim until recently, when a surprising group has come forward to criticize the tech industry’s use of psychological manipulation: tech executives
  • Tristan Harris, formerly a design ethicist at Google, has led the way by unmasking the industry’s use of persuasive design. Interviewed in The Economist’s 1843 magazine, he says, “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”
  • Marc Benioff, CEO of the cloud computing company Salesforce, is one of the voices calling for the regulation of social media companies because of their potential to addict children. He says that just as the cigarette industry has been regulated, so too should social media companies. “I think that, for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive, and we need to rein that back as much as possible,”
  • “If there’s an unfair advantage or things that are out there that are not understood by parents, then the government’s got to come forward and illuminate that.”
  • Since millions of parents, for example the parents of my patient Kelly, have absolutely no idea that devices are used to hijack their children’s minds and lives, regulation of such practices is the right thing to do.
  • Another improbable group to speak out on behalf of children is tech investors.
  • How has the consumer tech industry responded to these calls for change? By going even lower.
  • Facebook recently launched Messenger Kids, a social media app that will reach kids as young as five years old. Suggestive that harmful persuasive design is now homing in on very young children is the declaration of Messenger Kids art director Shiu Pei Luu: “We want to help foster communication [on Facebook] and make that the most exciting thing you want to be doing.”
  • the American Psychological Association (APA) — which is tasked with protecting children and families from harmful psychological practices — has been essentially silent on the matter
  • APA Ethical Standards require the profession to make efforts to correct the “misuse” of the work of psychologists, which would include the application of B.J. Fogg’s persuasive technologies to influence children against their best interests
  • Manipulating children for profit without their own or parents’ consent, and driving kids to spend more time on devices that contribute to emotional and academic problems is the embodiment of unethical psychological practice.
  • “Never before in history have basically 50 mostly men, mostly 20–35, mostly white engineer designer types within 50 miles of where we are right now [Silicon Valley], had control of what a billion people think and do.”
  • Some may argue that it’s the parents’ responsibility to protect their children from tech industry deception. However, parents have no idea of the powerful forces aligned against them, nor do they know how technologies are developed with drug-like effects to capture kids’ minds
  • Others will claim that nothing should be done because the intention behind persuasive design is to build better products, not manipulate kids
  • similar circumstances exist in the cigarette industry, as tobacco companies have as their intention profiting from the sale of their product, not hurting children. Nonetheless, because cigarettes and persuasive design predictably harm children, actions should be taken to protect kids from their effects.
  • in a 1998 academic paper, Fogg describes what should happen if things go wrong, saying, if persuasive technologies are “deemed harmful or questionable in some regard, a researcher should then either take social action or advocate that others do so.”
  • I suggest turning to President John F. Kennedy’s prescient guidance: He said that technology “has no conscience of its own. Whether it will become a force for good or ill depends on man.”
  • The APA should begin by demanding that the tech industry’s behavioral manipulation techniques be brought out of the shadows and exposed to the light of public awareness
  • Changes should be made in the APA’s Ethics Code to specifically prevent psychologists from manipulating children using digital machines, especially if such influence is known to pose risks to their well-being.
  • Moreover, the APA should follow its Ethical Standards by making strong efforts to correct the misuse of psychological persuasion by the tech industry and by user experience designers outside the field of psychology.
  • It should join with tech executives who are demanding that persuasive design in kids’ tech products be regulated
  • The APA also should make its powerful voice heard amongst the growing chorus calling out tech companies that intentionally exploit children’s vulnerabilities.
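
Several annotations above cite the “Fogg Behavior Model” of motivation, ability, and triggers. A minimal sketch of that B = MAT idea as a simple gate follows; the 0-to-1 scores and the threshold constant are illustrative assumptions of ours, not Fogg’s published calibration.

```python
from dataclasses import dataclass

@dataclass
class User:
    motivation: float  # 0..1, e.g. drive for social acceptance
    ability: float     # 0..1, higher when the product needs no hard thinking

ACTION_THRESHOLD = 0.25  # illustrative constant, not a published value

def behavior_occurs(user: User, trigger: bool) -> bool:
    """Fogg-style gate: the behavior fires when a trigger (say, a push
    notification) arrives while motivation x ability clears the threshold."""
    return trigger and (user.motivation * user.ability) > ACTION_THRESHOLD

teen = User(motivation=0.9, ability=0.8)     # strong social drive, easy UI
print(behavior_occurs(teen, trigger=True))   # True: the notification converts
print(behavior_occurs(teen, trigger=False))  # False: no trigger, no behavior
```

On this reading, the design levers named in the annotations map directly onto the three factors: fear of social rejection raises motivation, “don’t make users think hard” raises ability, and incessant notifications supply the trigger.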
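
The Dopamine Labs and persuasion-profile annotations describe, in effect, an online learning loop: try persuasive design variants on a user, record which ones re-engage them, and favor those. A minimal epsilon-greedy sketch of such a loop follows; the variant names, engagement probabilities, and reward signal are hypothetical stand-ins, not any company’s actual system.

```python
import random

VARIANTS = ["badge_count", "friend_activity", "streak_reminder"]  # hypothetical

class PersuasionProfile:
    """Per-user record of which design elements have re-engaged this user."""
    def __init__(self):
        self.plays = {v: 0 for v in VARIANTS}
        self.wins = {v: 0 for v in VARIANTS}   # times a variant re-engaged the user

    def choose(self, epsilon: float = 0.1) -> str:
        if random.random() < epsilon:          # explore a random variant sometimes
            return random.choice(VARIANTS)
        return max(VARIANTS, key=lambda v:     # otherwise exploit the best win rate
                   self.wins[v] / self.plays[v] if self.plays[v] else 0.0)

    def record(self, variant: str, engaged: bool) -> None:
        self.plays[variant] += 1
        self.wins[variant] += engaged          # bool counts as 0 or 1

profile = PersuasionProfile()
for _ in range(1000):
    v = profile.choose()
    # Stand-in for the real signal: did this nudge bring the user back?
    engaged = random.random() < {"badge_count": 0.2,
                                 "friend_activity": 0.5,
                                 "streak_reminder": 0.3}[v]
    profile.record(v, engaged)
print(profile.wins)  # "friend_activity" dominates: the profile has learned the hook
```

Epsilon-greedy is the simplest possible stand-in here; the annotations suggest production systems run far more elaborate machine-learning pipelines over the same basic loop.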

Franklin Foer Has A Score To Settle With Facebook - The Forward - 0 views

  • he argues that we are pressed into conformity. By constantly interacting with these companies’ products, we have allowed them to intrude upon our inner lives, destroy contemplation and manipulate our behaviors.
  • I think it’s impossible to think metaphysically, impossible to think about the things that go beyond the world of appearance, if your attention is constantly being directed and if you’re constantly being distracted. So I think that contemplation is the necessary ingredient that makes a spiritual life possible.
  • privacy is something that everybody claims to want, but nobody articulates why. Privacy is beyond just having somebody get a peek through your window. The threat isn’t just that your space is being crowded and violated. What Brandeis was worried about was that idea that the fear of somebody looking over your shoulder as you think would start to affect your thought — that as soon as we know we have an audience, we start to bend our opinions to try to please our audience.