Home/ TOK Friends/ Group items tagged intelligent life


margogramiak

Low-income preschoolers exposed to nurturing care have higher IQ scores later on -... - 0 views

  • Preschoolers living in impoverished communities who have access to a nurturing home environment have significantly higher intelligence quotient (IQ) scores in adolescence compared to those raised without nurturing care.
    • margogramiak
       
      In class, we've talked about the effects of economic and emotional states growing up.
  • They found that prenatal and early life adversities matter throughout life.
    • margogramiak
       
      Of course they do! How could they not? In Spanish, we learned about the "circle of poverty," which definitely applies here.
  • ...5 more annotations...
  • They also found that being raised in a nurturing environment could significantly counteract the detrimental effect of early adversities on IQ and help children achieve their full intellectual potential.
    • margogramiak
       
      I think "feeling loved" and "feeling like you are enough" are huge contributors to success. If you are told you can do something, you are much more confident than if you are told you can't, obviously.
  • A nurturing environment also led to better growth and fewer psycho-social difficulties in adolescence, but it did not mitigate the effects of early adversities on growth and psycho-social difficulties."
    • margogramiak
       
      Interesting.
  • one in five children are raised in poverty and 15 percent do not complete high school, with higher rates for children in Black and Hispanic families.
    • margogramiak
       
      These are very impactful stats.
  • Parents want to provide nurturing environments and we need to help them." She said this includes interacting with young children in a positive way such as reading children's books from the library, singing songs together, and playing games with numbers and letters. Children who engage in age-appropriate chores with adult supervision like picking up toys and clearing the table gain skills and feel good about helping.
    • margogramiak
       
      This is up to the parents though, isn't it? How can the community solve the issue of lack of nurture in a household?
  • "This research highlights the importance of nurturing caregivers, both at home and at school to help children lead more productive lives as adults."
    • margogramiak
       
      It seems obvious nurturing has positive effects. I find it hard to believe that anyone who doesn't nurture their children would read this article and change the way they parent. I wish there were a way for the community to help out, but I don't think that is a possibility.
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on t
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
johnsonel7

Brainless Creatures Can Do Some Incredibly Smart Things - 0 views

  • There's no denying that human intelligence makes our species stand out from other life on Earth. Our brains, much more than our brawn, account for our evolutionary successes—as well as our sometimes devastating impacts on the planet.
  • Depending on your definition, some basic hallmarks of intelligence—from decision-making to learning—can even be found in creatures that don't have brains at all.
  • These skills let slime molds do impressively well at some human tasks. When presented with bits of food arranged like the stops on Tokyo's rail system, branching slime molds come close to recreating the actual railway network. And in 2016, a team led by Macquarie University researcher Chris Reid showed that slime molds can solve the two-armed bandit problem—a decision-making test normally reserved for organisms with brains.
  • ...1 more annotation...
  • they're facing the direction of next morning's sunrise, behavior that requires the plant to anticipate the future. What's more, young shoots of corn can “remember” the directions of light sources.
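The two-armed bandit problem mentioned in the slime-mold excerpt is a classic explore-versus-exploit test: an agent must split its pulls between two payout arms while learning which one pays better. As a rough illustration of what the test asks of an organism, here is a minimal epsilon-greedy sketch; the reward probabilities, pull count, and epsilon value are illustrative assumptions, not figures from the study.

```python
import random

def two_armed_bandit(p_left, p_right, pulls=1000, epsilon=0.1, seed=0):
    """Run an epsilon-greedy agent on a two-armed bandit.

    Each arm pays reward 1 with its own fixed probability. The agent
    usually pulls the arm with the best observed average ("exploit"),
    but with probability `epsilon` it pulls a random arm ("explore").
    Returns how many times each arm was pulled.
    """
    rng = random.Random(seed)
    probs = [p_left, p_right]
    counts = [0, 0]      # pulls per arm
    totals = [0.0, 0.0]  # summed rewards per arm
    for _ in range(pulls):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(2)  # explore (or try an untouched arm)
        else:
            means = [totals[i] / counts[i] for i in range(2)]
            arm = means.index(max(means))  # exploit the best arm so far
        reward = 1 if rng.random() < probs[arm] else 0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Over many pulls, the richer arm (0.8 vs. 0.2) should attract most of them.
counts = two_armed_bandit(0.8, 0.2)
```

A solver "passes" the test when its pulls concentrate on the richer arm; the striking claim in the excerpt is that slime molds manage an analogous allocation without any neurons.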
Javier E

Campus Suicide and the Pressure of Perfection - The New York Times - 1 views

  • It also recognized a potentially life-threatening aspect of campus culture: Penn Face. An apothegm long used by students to describe the practice of acting happy and self-assured even when sad or stressed, Penn Face is so widely employed that it has shown up in skits performed during freshman orientation.
  • While the appellation is unique to Penn, the behavior is not. In 2003, Duke jolted academe with a report describing how its female students felt pressure to be “effortlessly perfect”: smart, accomplished, fit, beautiful and popular, all without visible effort. At Stanford, it’s called the Duck Syndrome. A duck appears to glide calmly across the water, while beneath the surface it frantically, relentlessly paddles.
  • Citing a “perception that one has to be perfect in every academic, cocurricular and social endeavor,” the task force report described how students feel enormous pressure that “can manifest as demoralization, alienation or conditions like anxiety or depression.”
  • ...14 more annotations...
  • While she says her parents are not overbearing, she relishes their praise for performing well. “Hearing my parents talk about me in a positive way, or hearing other parents talk about their kids doing well in academics or extracurriculars, that’s where I got some of the expectations for myself,” she said. “It was like self-fulfillment: I’d feel fulfilled and happy when other people were happy with what I’m doing, or expectations they have are met.”
  • Getting a B can cause some students to fall apart, she said. “What you and I would call disappointments in life, to them feel like big failures.”
  • a shift in how some young adults cope with challenges. “A small setback used to mean disappointment, or having that feeling of needing to try harder next time,” he said. Now? “For some students, a mistake has incredible meaning.”
  • The existential question “Why am I here?” is usually followed by the equally confounding “How am I doing?” In 1954, the social psychologist Leon Festinger put forward the social comparison theory, which posits that we try to determine our worth based on how we stack up against others.
  • In the era of social media, such comparisons take place on a screen with carefully curated depictions that don’t provide the full picture. Mobile devices escalate the comparisons from occasional to nearly constant.
  • When students remark during a counseling session that everyone else on campus looks happy, he tells them: “I walk around and think, ‘That one’s gone to the hospital. That person has an eating disorder. That student just went on antidepressants.’ As a therapist, I know that nobody is as happy or as grown-up as they seem on the outside.”
  • Madison Holleran’s suicide provided what might be the ultimate contrast between a shiny Instagram feed and interior darkness. Ms. Holleran posted images that show her smiling, dappled in sunshine or kicking back at a party. But according to her older sister, Ashley, Madison judged her social life as inferior to what she saw in the online posts of her high school friends
  • These cultural dynamics of perfectionism and overindulgence have now combined to create adolescents who are ultra-focused on success but don’t know how to fail.
  • Julie Lythcott-Haims watched the collision of these two social forces up close. In meetings with students, she would ask what she considered simple questions and they would become paralyzed, unable to express their desires and often discovering midconversation that they were on a path that they didn’t even like.
  • “They could say what they’d accomplished, but they couldn’t necessarily say who they were,”
  • She was also troubled by the growing number of parents who not only stayed in near-constant cellphone contact with their offspring but also showed up to help them enroll in classes, contacted professors and met with advisers (illustrating the progression from helicopter to lawn mower parents, who go beyond hovering to clear obstacles out of their child’s way). But what she found most disconcerting was that students, instead of being embarrassed, felt grateful. Penn researchers studying friendship have found that students’ best friends aren’t classmates or romantic partners, but parents.
  • Eventually she came to view her students’ lack of self-awareness, inability to make choices and difficulty coping with setbacks as a form of “existential impotence,” a direct result of a well-meaning but misguided approach to parenting that focuses too heavily on external measures of character.
  • “The Drama of the Gifted Child: The Search for the True Self.” In the book, published in 1979 and translated into 30 languages, Ms. Miller documents how some especially intelligent and sensitive children can become so attuned to parents’ expectations that they do whatever it takes to fulfill those expectations — at the expense of their own feelings and needs. This can lead to emotional emptiness and isolation
  • “In what is described as depression and experienced as emptiness, futility, fear of impoverishment, and loneliness,” she wrote, “can usually be recognized as the tragic loss of the self in childhood.”
Javier E

Young Minds in Critical Condition - NYTimes.com - 1 views

  • Our best college students are very good at being critical. In fact being smart, for many, means being critical. Having strong critical skills shows that you will not be easily fooled. It is a sign of sophistication, especially when coupled with an acknowledgment of one’s own “privilege.”
  • The combination of resistance to influence and deflection of responsibility by confessing to one’s advantages is a sure sign of one’s ability to negotiate the politics of learning on campus.
  • Taking things apart, or taking people down, can provide the satisfactions of cynicism. But this is thin gruel.
  • ...7 more annotations...
  • In overdeveloping the capacity to show how texts, institutions or people fail to accomplish what they set out to do, we may be depriving students of the chance to learn as much as possible from what they study.
  • As debunkers, they contribute to a cultural climate that has little tolerance for finding or making meaning — a culture whose intellectuals and cultural commentators get “liked” by showing that somebody else just can’t be believed.
  • Liberal education in America has long been characterized by the intertwining of two traditions: of critical inquiry in pursuit of truth and exuberant performance in pursuit of excellence. In the last half-century, though, emphasis on inquiry has become dominant, and it has often been reduced to the ability to expose error and undermine belief.
  • fetishizing disbelief as a sign of intelligence has contributed to depleting our cultural resources. Creative work, in whatever field, depends upon commitment, the energy of participation and the ability to become absorbed in works of literature, art and science. That type of absorption is becoming an endangered species of cultural life, as our nonstop, increasingly fractured technological existence wears down our receptive capacities.
  • Liberal learning depends on absorption in compelling work. It is a way to open ourselves to the various forms of life in which we might actively participate. When we learn to read or look or listen intensively, we are, at least temporarily, overcoming our own blindness by trying to understand an experience from another’s point of view.
  • we are learning to activate potential, and often to instigate new possibilities.
  • Liberal education must not limit itself to critical thinking and problem solving; it must also foster openness, participation and opportunity. It should be designed to take us beyond the campus to a life of ongoing, pragmatic learning that finds inspiration in unexpected sources, and increases our capacity to understand and contribute to the world
grayton downing

In Syria, Doctors Risk Life and Juggle Ethics - NYTimes.com - 1 views

  • Majid, who gave only his first name to protect his safety, collected hair and urine samples, clothing, tree leaves, soil and even a dead bird. He shared it with the Syrian American Medical Society, a humanitarian group that had been delivering such samples to American intelligence officials, as proof of possible chemical attacks.
  • United Nations inspectors have taken the first steps to destroy Syria’s chemical stockpile.
  • Many Syrian doctors have fled; those who remain describe dire conditions where even the most basic care is not available.
  • ...4 more annotations...
  • Mothers are desperate to have their children vaccinated; patients with chronic conditions like heart disease and diabetes struggle to get medicine; and there is “huge anxiety in the population,”
  • On Aug. 21, the group got word from some of its “silent partner” hospitals of a flood of patients with “neurotoxic symptoms” — roughly 3,600 in a period of three hours, including 355 who died.
  • The debate over whether doctors should expose human rights abuses has long been “one of these inside baseball arguments within the humanitarian community,” said Len Rubenstein, an expert on human rights and medical ethics at Johns Hopkins University. While Doctors Without Borders has a culture of “bearing witness,” he said, not all humanitarian organizations do.
  • The International Committee of the Red Cross, for instance, adheres to a strict code of political neutrality;
grayton downing

In Syria, Doctors Risk Life and Juggle Ethics - NYTimes.com - 1 views

  • “Doctors are notoriously poor evaluators of chemical warfare injuries if they have never seen them before,”
  • Those reports, he said, were later “cynically manipulated” by American intelligence officials to assert that Iran — not Iraq — was using cyanide. In fact, he continued, there is no evidence that either side was.
  • It’s not our role to collect samples for any government or investigative agency.
  • ...2 more annotations...
  • In an interview afterward, he said he was able to take samples from two people; the crush of patients was too overwhelming to do more. He worried about preserving them; with electricity working only sporadically, there was no constant refrigeration.
  • “with some degree of varying confidence,” Mr. Hagel said — that the Syrian government had used chemical weapons.
Javier E

I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets | Techn... - 0 views

  • I emailed Tinder requesting my personal data and got back way more than I bargained for. Some 800 pages came back containing information such as my Facebook “likes”, my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened … the list goes on.
  • “You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”
  • What will happen if this treasure trove of data gets hacked, is made public or simply bought by another company? I can almost feel the shame I would experience. The thought that, before sending me these 800 pages, someone at Tinder might have read them already makes me cringe.
  • ...3 more annotations...
  • In May, an algorithm was used to scrape 40,000 profile images from the platform in order to build an AI to “genderise” faces. A few months earlier, 70,000 profiles from OkCupid (owned by Tinder’s parent company Match Group) were made public by a Danish researcher some commentators have labelled a “white supremacist”, who used the data to try to establish a link between intelligence and religious beliefs. The data is still out there.
  • The trouble is these 800 pages of my most intimate data are actually just the tip of the iceberg. “Your personal data affects who you see first on Tinder, yes,” says Dehaye. “But also what job offers you have access to on LinkedIn, how much you will pay for insuring your car, which ad you will see in the tube and if you can subscribe to a loan. “We are leaning towards a more and more opaque society, towards an even more intangible world where data collected about you will decide even larger facets of your life. Eventually, your whole existence will be affected.”
  • As a typical millennial constantly glued to my phone, my virtual life has fully merged with my real life. There is no difference any more. Tinder is how I meet people, so this is my reality. It is a reality that is constantly being shaped by others – but good luck trying to find out how.
anonymous

A Life Spent Focused on What Computers Are Doing to Us - The New York Times - 0 views

  • A Life Spent Focused on What Computers Are Doing to Us
  • We are, she fears, in danger of producing an emotionally sterile society more akin to that of the robots coming down the road.
  • Turkle was born in 1948 into a lower-middle-class family that raised her to assume she would ace every test she ever took and marry a nice Jewish boy with whom she would raise a brood of children to ensure the survival of the Jewish people.
  • ...15 more annotations...
  • Her parents divorced when she was a toddler, and she was raised in a crowded Brooklyn apartment by her mother, her mother’s sister and her grandparents, all of whom unstintingly adored her
  • “Four loving adults had made me the center of their lives
  • Always the smartest kid in the room (she was a remarkable test-taker), Turkle flourished early as an intellectually confident person, easily winning a scholarship to Radcliffe, support for graduate school at Harvard
  • Newly graduated from Radcliffe, she was in Paris during the May 1968 uprising and was shocked by the responses of most French thinkers to what was happening in the streets
  • Each in turn, she observed, filtered the originality of the scene through his own theories.
  • Few saw these galvanizing events as the demonstration they so clearly were of a hungry demand for new relations between the individual and society.
  • The anecdotes that illustrate this marriage encapsulate, in an inspired way, the dilemma Turkle has spent her whole life exploring:
  • My interests were moving from ideas in the abstract to the impact of ideas on personal identity. How did new political ideas change how people saw themselves? And what made some ideas more appealing than others?”
  • For the people around her, it embodied “the science of getting computers to do things that would be considered intelligent if done by people.” Nothing more exciting. Who could resist such a possibility? Who would resist it? No one, it turned out.
  • “The worst thing, to Seymour,” she writes, would have been “to give children a computer that presented them only with games or opaque applications. … A learning opportunity would be missed because you would have masked the intellectual power of the machine. Sadly, this is what has happened.”
  • In a memoir written by a person of accomplishment, the interwoven account of childhood and early influences is valuable only insofar as it sheds light on the evolution of the individual into the author of the memoir we are reading.
  • with Turkle’s story of her marriage to Seymour Papert, her personal adventures struck gold.
  • “good conversation” was valued “more highly than common courtesy. … To be interesting, Seymour did not have to be kind. He had to be brilliant.” And if you weren’t the sort of brilliant that he was, you were something less than real to him.
  • electrified
  • the rupture in understanding between someone devoted to the old-fashioned practice of humanist values and someone who doesn’t know what the word “human” really means.
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
sissij

Elon Musk's New Company to Merge Human Brains with Machines | Big Think - 1 views

  • His new company Neuralink will work on linking human brains with computers, utilizing “neural lace” technology.
  • Musk talked recently about this kind of technology, seeing it as a way for humans to interact with machines and superintelligences.
  • What's next? We'll wait for the details. Elon Musk's influence on our modern life and aura certainly continue to grow, especially if he'll deliver on the promises of his various enterprises.
  •  
    My mom had a little research project on Tesla and she assigned me to do it, so I know some of Tesla's strategies and ideas, although not in depth. I found that Tesla and Elon Musk have very innovative ideas for their products. The electric car is the signature of Tesla. The design of the car and the idea of being green are really friendly to the environment of Earth. Now they are talking about the new idea of merging human intelligence with machines. --Sissi (4/2/2017)
Keiko E

Can a Computer Win on 'Jeopardy'? - WSJ.com - 0 views

  • Only three years earlier, the suggestion that a computer might match wits and word skills with human champions in "Jeopardy" sparked opposition bordering on ridicule in the halls of IBM Research.
  • The way Mr. Horn saw it, IBM had triumphed in 1997 with its chess challenge. The company's machine, Deep Blue, had defeated the reigning world champion, Garry Kasparov. This burnished IBM's reputation among the global computing elite while demonstrating to the world that computers could rival humans in certain domains associated with intelligence.
  • The next computer should charge into the vast expanse of human language and knowledge.
  • "Jeopardy," with its puns and strangely phrased clues, seemed too hard for a computer. IBM already had teams building machines to answer questions, and their performance, in speed and precision, came nowhere close to even a moderately informed human. How could the next machine grow so much smarter?
  • He was comfortable conversing about everything from the details of computational linguistics to the evolution of life on Earth and the nature of human thought. This made him an ideal ambassador for a "Jeopardy"-playing machine. After all, his project would raise all sorts of issues, and fears, about the role of brainy machines in society. Would they compete for jobs? Could they establish their own agendas, like the infamous computer, HAL, in "2001: A Space Odyssey," and take control? What was the future of knowledge and intelligence, and how would brains and machines divvy up the cognitive work?
Javier E

Opinion | A.I. Is Harder Than You Think - The New York Times - 1 views

  • The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.
  • The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules.
  • A genuine, human-level A.I. will need to be able to cope with all of those possible sentences, not just a small fragment of them.
  • No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
  • Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs.
  • That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
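The "infinitely many sentences from a modest set of rules" point in the excerpts above is easy to make concrete. The following is a minimal sketch, entirely my own invention (the grammar, words, and function names are illustrative, not drawn from the essay): a tiny recursive grammar whose single embedding rule ("knows that S") already generates an unbounded set of sentences.

```python
import random

# Toy context-free grammar (my own example): "S" can always embed another
# "S" via "knows that", so the language it generates is infinite even
# though the vocabulary is tiny.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "dog"], ["the", "cat"]],
    "VP": [["sleeps"], ["knows", "that", "S"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Randomly expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # Cap recursion so sampling terminates: drop self-embedding options.
        options = [o for o in options if "S" not in o] or options
    words = []
    for part in random.choice(options):
        words.extend(generate(part, depth + 1, max_depth))
    return words

random.seed(1)
for _ in range(3):
    print(" ".join(generate()))
```

Every sentence the grammar produces starts with an NP ("the ...") and bottoms out in "sleeps", but the nesting depth is arbitrary, which is exactly the property that makes covering "all possible sentences" with pattern-matching over finite data so hard.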
Javier E

Accelerationism: how a fringe philosophy predicted the future we live in | World news |... - 1 views

  • Roger Zelazny published his third novel. In many ways, Lord of Light was of its time, shaggy with imported Hindu mythology and cosmic dialogue. Yet there were also glints of something more forward-looking and political.
  • accelerationism has gradually solidified from a fictional device into an actual intellectual movement: a new way of thinking about the contemporary world and its potential.
  • Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.
  • Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled.
  • Accelerationism, therefore, goes against conservatism, traditional socialism, social democracy, environmentalism, protectionism, populism, nationalism, localism and all the other ideologies that have sought to moderate or reverse the already hugely disruptive, seemingly runaway pace of change in the modern world
  • Robin Mackay and Armen Avanessian in their introduction to #Accelerate: The Accelerationist Reader, a sometimes baffling, sometimes exhilarating book, published in 2014, which remains the only proper guide to the movement in existence.
  • “We all live in an operating system set up by the accelerating triad of war, capitalism and emergent AI,” says Steve Goodman, a British accelerationist
  • A century ago, the writers and artists of the Italian futurist movement fell in love with the machines of the industrial era and their apparent ability to invigorate society. Many futurists followed this fascination into war-mongering and fascism.
  • One of the central figures of accelerationism is the British philosopher Nick Land, who taught at Warwick University in the 1990s
  • Land has published prolifically on the internet, not always under his own name, about the supposed obsolescence of western democracy; he has also written approvingly about “human biodiversity” and “capitalistic human sorting” – the pseudoscientific idea, currently popular on the far right, that different races “naturally” fare differently in the modern world; and about the supposedly inevitable “disintegration of the human species” when artificial intelligence improves sufficiently.
  • In our politically febrile times, the impatient, intemperate, possibly revolutionary ideas of accelerationism feel relevant, or at least intriguing, as never before. Noys says: “Accelerationists always seem to have an answer. If capitalism is going fast, they say it needs to go faster. If capitalism hits a bump in the road, and slows down” – as it has since the 2008 financial crisis – “they say it needs to be kickstarted.”
  • On alt-right blogs, Land in particular has become a name to conjure with. Commenters have excitedly noted the connections between some of his ideas and the thinking of both the libertarian Silicon Valley billionaire Peter Thiel and Trump’s iconoclastic strategist Steve Bannon.
  • “In Silicon Valley,” says Fred Turner, a leading historian of America’s digital industries, “accelerationism is part of a whole movement which is saying, we don’t need [conventional] politics any more, we can get rid of ‘left’ and ‘right’, if we just get technology right. Accelerationism also fits with how electronic devices are marketed – the promise that, finally, they will help us leave the material world, all the mess of the physical, far behind.”
  • In 1972, the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari published Anti-Oedipus. It was a restless, sprawling, appealingly ambiguous book, which suggested that, rather than simply oppose capitalism, the left should acknowledge its ability to liberate as well as oppress people, and should seek to strengthen these anarchic tendencies, “to go still further … in the movement of the market … to ‘accelerate the process’”.
  • By the early 90s Land had distilled his reading, which included Deleuze and Guattari and Lyotard, into a set of ideas and a writing style that, to his students at least, were visionary and thrillingly dangerous. Land wrote in 1992 that capitalism had never been properly unleashed, but instead had always been held back by politics, “the last great sentimental indulgence of mankind”. He dismissed Europe as a sclerotic, increasingly marginal place, “the racial trash-can of Asia”. And he saw civilisation everywhere accelerating towards an apocalypse: “Disorder must increase... Any [human] organisation is ... a mere ... detour in the inexorable death-flow.”
  • With the internet becoming part of everyday life for the first time, and capitalism seemingly triumphant after the collapse of communism in 1989, a belief that the future would be almost entirely shaped by computers and globalisation – the accelerated “movement of the market” that Deleuze and Guattari had called for two decades earlier – spread across British and American academia and politics during the 90s. The Warwick accelerationists were in the vanguard.
  • In the US, confident, rainbow-coloured magazines such as Wired promoted what became known as “the Californian ideology”: the optimistic claim that human potential would be unlocked everywhere by digital technology. In Britain, this optimism influenced New Labour
  • At Warwick, however, the prophecies were darker. “One of our motives,” says Plant, “was precisely to undermine the cheery utopianism of the 90s, much of which seemed very conservative” – an old-fashioned male desire for salvation through gadgets, in her view.
  • The CCRU gang formed reading groups and set up conferences and journals. They squeezed into the narrow CCRU room in the philosophy department and gave each other impromptu seminars.
  • The main result of the CCRU’s frantic, promiscuous research was a conveyor belt of cryptic articles, crammed with invented terms, sometimes speculative to the point of being fiction.
  • The Warwick accelerationists saw themselves as participants, not traditional academic observers
  • K-punk was written by Mark Fisher, formerly of the CCRU. The blog retained some Warwick traits, such as quoting reverently from Deleuze and Guattari, but it gradually shed the CCRU’s aggressive rhetoric and pro-capitalist politics for a more forgiving, more left-leaning take on modernity. Fisher increasingly felt that capitalism was a disappointment to accelerationists, with its cautious, entrenched corporations and endless cycles of essentially the same products. But he was also impatient with the left, which he thought was ignoring new technology
  • Alex Williams co-wrote a Manifesto for an Accelerationist Politics. “Capitalism has begun to constrain the productive forces of technology,” they wrote. “[Our version of] accelerationism is the basic belief that these capacities can and should be let loose … repurposed towards common ends … towards an alternative modernity.”
  • What that “alternative modernity” might be was barely, but seductively, sketched out, with fleeting references to reduced working hours, to technology being used to reduce social conflict rather than exacerbate it, and to humanity moving “beyond the limitations of the earth and our own immediate bodily forms”. On politics and philosophy blogs from Britain to the US and Italy, the notion spread that Srnicek and Williams had founded a new political philosophy: “left accelerationism”.
  • Two years later, in 2015, they expanded the manifesto into a slightly more concrete book, Inventing the Future. It argued for an economy based as far as possible on automation, with the jobs, working hours and wages lost replaced by a universal basic income. The book attracted more attention than a speculative leftwing work had for years, with interest and praise from intellectually curious leftists
  • Even the thinking of the arch-accelerationist Nick Land, who is 55 now, may be slowing down. Since 2013, he has become a guru for the US-based far-right movement neoreaction, or NRx as it often calls itself. Neoreactionaries believe in the replacement of modern nation-states, democracy and government bureaucracies by authoritarian city states, which on neoreaction blogs sound as much like idealised medieval kingdoms as they do modern enclaves such as Singapore.
  • Land argues now that neoreaction, like Trump and Brexit, is something that accelerationists should support, in order to hasten the end of the status quo.
  • In 1970, the American writer Alvin Toffler, an exponent of accelerationism’s more playful intellectual cousin, futurology, published Future Shock, a book about the possibilities and dangers of new technology. Toffler predicted the imminent arrival of artificial intelligence, cryonics, cloning and robots working behind airline check-in desks
  • Land left Britain. He moved to Taiwan “early in the new millennium”, he told me, then to Shanghai “a couple of years later”. He still lives there now.
  • In a 2004 article for the Shanghai Star, an English-language paper, he described the modern Chinese fusion of Marxism and capitalism as “the greatest political engine of social and economic development the world has ever known”
  • Once he lived there, Land told me, he realised that “to a massive degree” China was already an accelerationist society: fixated by the future and changing at speed. Presented with the sweeping projects of the Chinese state, his previous, libertarian contempt for the capabilities of governments fell away
  • Without a dynamic capitalism to feed off, as Deleuze and Guattari had in the early 70s, and the Warwick philosophers had in the 90s, it may be that accelerationism just races up blind alleys. In his 2014 book about the movement, Malign Velocities, Benjamin Noys accuses it of offering “false” solutions to current technological and economic dilemmas. With accelerationism, he writes, a breakthrough to a better future is “always promised and always just out of reach”.
  • “The pace of change accelerates,” concluded a documentary version of the book, with a slightly hammy voiceover by Orson Welles. “We are living through one of the greatest revolutions in history – the birth of a new civilisation.”
  • Shortly afterwards, the 1973 oil crisis struck. World capitalism did not accelerate again for almost a decade. For much of the “new civilisation” Toffler promised, we are still waiting
johnsonel7

Remember This: Memory Requires Intelligent Design | Evolution News - 0 views

  • Evidence from a new study shows that memory circuits assist the mind in recalling thoughts by associating them with sensations. The mind can use the brain’s storage mechanisms to sort more important or urgent memories for faster recall. 
  • In particular, it remains unknown whether factors that structure the retrieval of external stimuli also apply to thought recall, and whether some thought features affect their accessibility in memory.
  • These observations appear to support the view that the conscious self uses memory as a tool. The mind uses the brain; it is not a passive illusion conjured up by the brain. The brain is active and ready, storing each sensation as we walk through daily life, but thoughts that involve planning are more readily recalled. It’s as if the self tells the brain, “Remember this,” and the brain obliges like a computer operator or librarian, pigeonholing the data where it can be recalled more easily later.
  • Many of these thoughts are likely forgotten, the authors say, probably because there is only so much a person can focus on in the sea of perceptions and stimuli going on around us. If the intuition is valuable enough to the self, the brain will assist future recall of the intuition
  • They also fit well with the idea that the greater accessibility of planning thoughts in memory results from an evolutionary process whereby mental contents that enhance survival chances are better remembered.
Javier E

The AI is eating itself - by Casey Newton - Platformer - 0 views

  • there also seems to be little doubt that AI is corroding the web.
  • Two new studies offered some cause for alarm. (I discovered both in the latest edition of Import AI, the indispensable weekly newsletter from Anthropic co-founder and former journalist Jack Clark.)
  • The first study, which had an admittedly small sample size, found that crowd-sourced workers on Amazon’s Mechanical Turk platform increasingly admit to using LLMs to perform text-based tasks.
  • Until now, the assumption has been that they will answer truthfully based on their own experiences. In a post-ChatGPT world, though, academics can no longer make that assumption. Given the mostly anonymous, transactional nature of the assignment, it’s easy to imagine a worker signing up to participate in a large number of studies and outsource all their answers to a bot. This “raises serious concerns about the gradual dilution of the ‘human factor’ in crowdsourced text data,” the researchers write.
  • “This, if true, has big implications,” Clark writes. “It suggests the proverbial mines from which companies gather the supposed raw material of human insights are now instead being filled up with counterfeit human intelligence.”
  • A second, more worrisome study comes from researchers at the University of Oxford,  University of Cambridge, University of Toronto, and Imperial College London. It found that training AI systems on data generated by other AI systems — synthetic data, to use the industry’s term — causes models to degrade and ultimately collapse. While the decay can be managed by using synthetic data sparingly, researchers write, the idea that models can be “poisoned” by feeding them their own outputs raises real risks for the web
  • that’s a problem, because — to bring together the threads of today’s newsletter so far — AI output is spreading to encompass more of the web every day. “The obvious larger question,” Clark writes, “is what this does to competition among AI developers as the internet fills up with a greater percentage of generated versus real content.”
  • In The Verge, Vincent argues that the current wave of disruption will ultimately bring some benefits, even if it’s only to unsettle the monoliths that have dominated the web for so long. “Even if the web is flooded with AI junk, it could prove to be beneficial, spurring the development of better-funded platforms,” he writes. “If Google consistently gives you garbage results in search, for example, you might be more inclined to pay for sources you trust and visit them directly.”
  • the glut of AI text will leave us with a web where the signal is ever harder to find in the noise. Early results suggest that these fears are justified — and that soon everyone on the internet, no matter their job, may soon find themselves having to exert ever more effort seeking signs of intelligent life.
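The "model collapse" result the excerpts above describe can be caricatured in a few lines. This is a toy sketch, entirely my own (nothing here comes from the Oxford/Cambridge paper itself): the "model" is just a fitted mean and standard deviation, and each generation is trained only on samples drawn from the previous generation's model, so estimation error compounds.

```python
import random
import statistics

# Toy illustration of training on synthetic data: refit a trivial "model"
# (a Gaussian's mean and standard deviation) on samples drawn from the
# previous generation's model, rather than on real data.
random.seed(42)

mu, sigma = 0.0, 1.0          # generation 0 is fit to the "real" world
history = [sigma]

for generation in range(50):
    # Synthetic data: drawn from the current model, not from reality.
    synthetic = [random.gauss(mu, sigma) for _ in range(100)]
    # Retrain the model on its own output.
    mu = statistics.fmean(synthetic)
    sigma = statistics.stdev(synthetic)
    history.append(sigma)

print(f"fitted sigma after {len(history) - 1} generations: {sigma:.3f}")
```

Because the sample standard deviation is a noisy estimator (and, for a Gaussian, slightly biased downward), the fitted sigma drifts away from the true value of 1.0 over the generations; in real neural models the analogous feedback loop shows up as lost distribution tails and degraded diversity. Mixing in real data — using synthetic data sparingly, as the researchers note — keeps the drift manageable.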
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Many of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of an evolution rather than revolution for Google and other large tech giants.
Javier E

A Crush on God | Commonweal magazine - 0 views

  • Ignatius taught the Jesuits to end each day doing something called the Examen. You start by acknowledging that God is there with you; then you give thanks for the good parts of your day (mine usually include food); and finally, you run through the events of the day from morning to the moment you sat down to pray, stopping to consider when you felt consolation, the closeness of God, or desolation, when you ignored God or when you felt like God bailed on you. Then you ask for forgiveness for anything shitty you did, and for guidance tomorrow. I realize I’ve spent most of my life saying “thanks” to people in a perfunctory, whatever kind of way. Now when I say it I really mean it, even if it’s to the guy who makes those lattes I love getting in the morning, because I stopped and appreciated his latte-making skills the night before. If you are lucky and prone to belief, the Examen will also help you start really feeling God in your life.
  • My church hosts a monthly dinner for the homeless. Serious work is involved; volunteers pull multiple shifts shopping, prepping, cooking, serving food, and cleaning. I show up for the first time and am shuttled into the kitchen by a harried young woman with a pen stuck into her ponytail, who asks me if I can lift heavy weights before putting me in front of two bins of potato salad and handing me an ice cream scoop. For three hours, I scoop potato salad onto plates, heft vats of potato salad, and scrape leftover potato salad into the compost cans. I never want to eat potato salad again, but I learn something about the homeless people I’ve been avoiding for years: some are mentally a mess, many—judging from the smell—are drunk off their asses, but on the whole, they are polite, intelligent, and, more than anything else, grateful. As I walk back to my car, I’m stopped several times by many of them who want to thank me, saying how good the food was, how much they enjoyed it. “I didn’t do anything,” I say in return. “You were there,” one of them replies. It’s enough to make me go back the next month, and the month after that. And in between, when I see people I feed on the street, instead of focusing my eyes in the sidewalk and hoping they go away, we have conversations. It’s those conversations that move me from intellectual distance toward a greater sense of gratitude for the work of God.
Javier E

How the Internet Gets Inside Us : The New Yorker - 0 views

  • It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • Robert K. Logan’s “The Sixth Language” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness.
  • ...14 more annotations...
  • In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of dissociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix.
  • The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965.
  • If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.
  • What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
  • Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing, the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
  • Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began.
  • A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them.
  • And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
  • Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.
  • the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.