
Home/ TOK Friends/ Group items tagged engineering


Javier E

I Actually Read Woody Allen's Memoir - The Atlantic - 0 views

  • I’m a Woody Allen person, not because I disbelieve Dylan—in fact, I believe her. I’m a Woody Allen person because his movies helped shape me, and I can’t unsee them, the way I can’t un-read The Great Gatsby or un-hear “Gimme Shelter.” These are things that informed my sensibilities. All of them are part of me.
  • As to our opinion about his past, one thing is for sure: He couldn’t care less about it. “Rather than live on in the hearts and minds of the public,” he says in the final lines of the book, “I prefer to live on in my apartment.” Exit laughing.
  • the scene in Hannah and Her Sisters in which the Woody Allen character, distraught by his realization that there is no God and considering suicide, stumbles into a revival house to find the movie playing. He says in voice-over: The movie was a film that I’d seen many times in my life since I was a kid, and I always loved it. And I’m watching these people up on the screen and I started getting hooked on the film. And I started to feel, How can you even think of killing yourself? I mean, isn’t it so stupid? I mean, look at all the people up there on the screen. They’re real funny—and what if the worst is true? What if there’s no God and you only go around once and that’s it? Well, you know, don’t you want to be part of the experience? I did. I do.
  • ...7 more annotations...
  • Can we still enjoy his work?  Of course we can, because the movies don’t really belong to Woody Allen any more than they do to you and me.
  • Some wrongs are so great that no legal or bureaucratic process can ever make things right. At some point, the only way to get unchained from a monster is through forgiveness. It would seem impossible for Geimer to be able to forgive Polanski, but she did—who understands how grace operates?—and has apparently been at peace ever since.
  • Woody Allen taught us that New York is the center of the world and L.A. is outer space. He installed himself in Elaine’s for a thousand dinners in the company of glittering figures of yesteryear (Norman Mailer! Liza Minnelli! Bill Bradley!) and made movie after movie and became one of the famous people the rest of the country associates most closely with the city.
  • He’s a 42-year-old guy with a 12th grade girlfriend, and if you want to understand the ’70s, maybe I have to tell you only that Vincent Canby’s review in The New York Times made no particular note of this fact other than to describe Tracy—played by Mariel Hemingway—as “a beautiful, 17-year-old nymphet with a turned-down mouth.” Or I could tell you that Manhattan was nominated for two Academy Awards and was widely loved by the in-crowd.
  • Soon enough he was writing, directing, and then acting in his own movies. He developed a particular method of moviemaking that was in some regards reminiscent of the studio system: he worked fast, almost never rehearsed, and rarely shot multiple takes or engineered complex shots: “As long as you’re dealing with comedy, particularly broad comedy, all you want is the scene should be lit, loud and fast.”
  • Allen is one of the great storytellers of his time, completely original, and any version of his life—including this one, in which we are obviously in the hands of an unreliable narrator, although no more so than in any of his autobiographical movies—can only be riveting. In a matter of phrases, he accomplishes things it takes a lesser writer several chapters to establish.
  • We have the damn thing. What are we going to do with it? I suppose we could start—why not?—by actually reading it. And within just a phrase or two, you realize why people were afraid of it: Allen is a matchless comic writer and one whose voice is so well known by his aging fans that it’s as though the book is pouring into you through a special receiver dedicated just to him. Woody Allen does a great Woody Allen.
Javier E

Understanding the Social Networks | Talking Points Memo - 0 views

  • Even when people understand in some sense – and often even in detail – how the algorithms work they still tend to see these platforms as modern, digital versions of the town square. There have always been people saying nonsensical things, lying, unknowingly peddling inaccurate information. And our whole civic order is based on a deep skepticism about any authority’s ability to determine what’s true or accurate and what’s not. So really there’s nothing new under the sun, many people say.
  • But all of these points become moot when the networks – the virtual public square – are actually run by a series of computer programs designed to maximize ‘engagement’ and strong emotion for the purposes of selling advertising.
  • But really all these networks are running experiments that put us collectively into the role of Pavlov’s dogs.
  • ...6 more annotations...
  • The algorithms are showing you things to see what you react to and showing you more of the things that prompt an emotional response, that make it harder to leave Facebook or Instagram or any of the other social networks.
  • really if your goal is to maximize engagement that is of course what you’d do since anger is a far more compelling and powerful emotion than appreciation.
  • Facebook didn’t do that. That’s coded into our neurology. Facebook really is an extremism generating machine. It’s really an inevitable part of the core engine.
  • it’s not just Facebook. Or perhaps you could say it’s not even Facebook at all. It’s the mix of machine learning and the business models of all the social networks
  • They have real upsides. They connect us with people. Show us fun videos. But they are also inherently destructive. And somehow we have to take cognizance of that – and not just as a matter of the business decisions of one company.
  • the social networks – meaning the mix of machine learning and advertising/engagement based business models – are really something new under the sun. They’re addiction and extremism generating systems. It’s what they’re designed to do.
Javier E

Technopoly-Chs. 9,10--Scientism, the great symbol drain - 0 views

  • By Scientism, I mean three interrelated ideas that, taken together, stand as one of the pillars of Technopoly.
  • The first and indispensable idea is, as noted, that the methods of the natural sciences can be applied to the study of human behavior. This idea is the backbone of much of psychology and sociology as practiced at least in America, and largely accounts for the fact that social science, to quote F. A. Hayek, "has contributed scarcely anything to our understanding of social phenomena." 2
  • The second idea is, as also noted, that social science generates specific principles which can be used to organize society on a rational and humane basis. This implies that technical means—mostly "invisible technologies" supervised by experts—can be designed to control human behavior and set it on the proper course.
  • ...63 more annotations...
  • The third idea is that faith in science can serve as a comprehensive belief system that gives meaning to life, as well as a sense of well-being, morality, and even immortality.
  • the spirit behind this scientific ideal inspired several men to believe that the reliable and predictable knowledge that could be obtained about stars and atoms could also be obtained about human behavior.
  • Among the best known of these early "social scientists" were Claude-Henri de Saint-Simon, Prosper Enfantin, and, of course, Auguste Comte.
  • They held in common two beliefs to which Technopoly is deeply indebted: that the natural sciences provide a method to unlock the secrets of both the human heart and the direction of social life; that society can be rationally and humanely reorganized according to principles that social science will uncover. It is with these men that the idea of "social engineering" begins and the seeds of Scientism are planted.
  • Information produced by counting may sometimes be valuable in helping a person get an idea, or, even more so, in providing support for an idea. But the mere activity of counting does not make science.
  • Nor does observing things, though it is sometimes said that if one is empirical, one is scientific. To be empirical means to look at things before drawing conclusions. Everyone, therefore, is an empiricist, with the possible exception of paranoid schizophrenics.
  • What we may call science, then, is the quest to find the immutable and universal laws that govern processes, presuming that there are cause-and-effect relations among these processes. It follows that the quest to understand human behavior and feeling can in no sense except the most trivial be called science.
  • Scientists do strive to be empirical and where possible precise, but it is also basic to their enterprise that they maintain a high degree of objectivity, which means that they study things independently of what people think or do about them.
  • I do not say, incidentally, that the Oedipus complex and God do not exist. Nor do I say that to believe in them is harmful—far from it. I say only that, there being no tests that could, in principle, show them to be false, they fall outside the purview of science, as do almost all theories that make up the content of "social science."
  • in the nineteenth century, novelists provided us with most of the powerful metaphors and images of our culture.
  • This fact relieves the scientist of inquiring into their values and motivations and for this reason alone separates science from what is called social science, consigning the methodology of the latter (to quote Gunnar Myrdal) to the status of the "metaphysical and pseudo-objective." 3
  • The status of social-science methods is further reduced by the fact that there are almost no experiments that will reveal a social-science theory to be false.
  • Let us further suppose that Milgram had found that 100 percent of his subjects did what they were told, with or without Hannah Arendt. And now let us suppose that I tell you a story of a group of people who in some real situation refused to comply with the orders of a legitimate authority—let us say, the Danes who in the face of Nazi occupation helped nine thousand Jews escape to Sweden. Would you say to me that this cannot be so because Milgram's study proves otherwise? Or would you say that this overturns Milgram's work? Perhaps you would say that the Danish response is not relevant, since the Danes did not regard the Nazi occupation as constituting legitimate authority. But then, how would we explain the cooperative response to Nazi authority of the French, the Poles, and the Lithuanians? I think you would say none of these things, because Milgram's experiment does not confirm or falsify any theory that might be said to postulate a law of human nature. His study—which, incidentally, I find both fascinating and terrifying—is not science. It is something else entirely.
  • Freud, could not imagine how the book could be judged exemplary: it was science or it was nothing. Well, of course, Freud was wrong. His work is exemplary-indeed, monumental-but scarcely anyone believes today that Freud was doing science, any more than educated people believe that Marx was doing science, or Max Weber or Lewis Mumford or Bruno Bettelheim or Carl Jung or Margaret Mead or Arnold Toynbee. What these people were doing-and Stanley Milgram was doing-is documenting the behavior and feelings of people as they confront problems posed by their culture.
  • the stories of social researchers are much closer in structure and purpose to what is called imaginative literature; that is to say, both a social researcher and a novelist give unique interpretations to a set of human events and support their interpretations with examples in various forms. Their interpretations cannot be proved or disproved but will draw their appeal from the power of their language, the depth of their explanations, the relevance of their examples, and the credibility of their themes.
  • And all of this has, in both cases, an identifiable moral purpose.
  • The words "true" and "false" do not apply here in the sense that they are used in mathematics or science. For there is nothing universally and irrevocably true or false about these interpretations. There are no critical tests to confirm or falsify them. There are no natural laws from which they are derived. They are bound by time, by situation, and above all by the cultural prejudices of the researcher or writer.
  • Both the novelist and the social researcher construct their stories by the use of archetypes and metaphors.
  • Cervantes, for example, gave us the enduring archetype of the incurable dreamer and idealist in Don Quixote. The social historian Marx gave us the archetype of the ruthless and conspiring, though nameless, capitalist. Flaubert gave us the repressed bourgeois romantic in Emma Bovary. And Margaret Mead gave us the carefree, guiltless Samoan adolescent. Kafka gave us the alienated urbanite driven to self-loathing. And Max Weber gave us hardworking men driven by a mythology he called the Protestant Ethic. Dostoevsky gave us the egomaniac redeemed by love and religious fervor. And B. F. Skinner gave us the automaton redeemed by a benign technology.
  • Why do such social researchers tell their stories? Essentially for didactic and moralistic purposes. These men and women tell their stories for the same reason the Buddha, Confucius, Hillel, and Jesus told their stories (and for the same reason D. H. Lawrence told his).
  • Moreover, in their quest for objectivity, scientists proceed on the assumption that the objects they study are indifferent to the fact that they are being studied.
  • If, indeed, the price of civilization is repressed sexuality, it was not Sigmund Freud who discovered it. If the consciousness of people is formed by their material circumstances, it was not Marx who discovered it. If the medium is the message, it was not McLuhan who discovered it. They have merely retold ancient stories in a modern style.
  • Unlike science, social research never discovers anything. It only rediscovers what people once were told and need to be told again.
  • Only in knowing something of the reasons why they advocated education can we make sense of the means they suggest. But to understand their reasons we must also understand the narratives that governed their view of the world. By narrative, I mean a story of human history that gives meaning to the past, explains the present, and provides guidance for the future.
  • In Technopoly, it is not enough to say, it is immoral and degrading to allow people to be homeless. You cannot get anywhere by asking a judge, a politician, or a bureaucrat to read Les Miserables or Nana or, indeed, the New Testament. You must show that statistics have produced data revealing the homeless to be unhappy and to be a drain on the economy. Neither Dostoevsky nor Freud, Dickens nor Weber, Twain nor Marx, is now a dispenser of legitimate knowledge. They are interesting; they are "worth reading"; they are artifacts of our past. But as for "truth," we must turn to "science."
  • In Technopoly, it is not enough for social research to rediscover ancient truths or to comment on and criticize the moral behavior of people. In Technopoly, it is an insult to call someone a "moralizer." Nor is it sufficient for social research to put forward metaphors, images, and ideas that can help people live with some measure of understanding and dignity.
  • Such a program lacks the aura of certain knowledge that only science can provide. It becomes necessary, then, to transform psychology, sociology, and anthropology into "sciences," in which humanity itself becomes an object, much like plants, planets, or ice cubes.
  • That is why the commonplaces that people fear death and that children who come from stable families valuing scholarship will do well in school must be announced as "discoveries" of scientific enterprise. In this way, social researchers can see themselves, and can be seen, as scientists, researchers without bias or values, unburdened by mere opinion. In this way, social policies can be claimed to rest on objectively determined facts.
  • given the psychological, social, and material benefits that attach to the label "scientist," it is not hard to see why social researchers should find it hard to give it up.
  • Our social "scientists" have from the beginning been less tender of conscience, or less rigorous in their views of science, or perhaps just more confused about the questions their procedures can answer and those they cannot. In any case, they have not been squeamish about imputing to their "discoveries" and the rigor of their procedures the power to direct us in how we ought rightly to behave.
  • It is less easy to see why the rest of us have so willingly, even eagerly, cooperated in perpetuating the same illusion.
  • When the new technologies and techniques and spirit of men like Galileo, Newton, and Bacon laid the foundations of natural science, they also discredited the authority of earlier accounts of the physical world, as found, for example, in the great tale of Genesis. By calling into question the truth of such accounts in one realm, science undermined the whole edifice of belief in sacred stories and ultimately swept away with it the source to which most humans had looked for moral authority. It is not too much to say, I think, that the desacralized world has been searching for an alternative source of moral authority ever since.
  • We welcome them gladly, and the claim explicitly made or implied, because we need so desperately to find some source outside the frail and shaky judgments of mortals like ourselves to authorize our moral decisions and behavior. And outside of the authority of brute force, which can scarcely be called moral, we seem to have little left but the authority of procedures.
  • It is not merely the misapplication of techniques such as quantification to questions where numbers have nothing to say; not merely the confusion of the material and social realms of human experience; not merely the claim of social researchers to be applying the aims and procedures of natural science to the human world.
  • This, then, is what I mean by Scientism.
  • It is the desperate hope, and wish, and ultimately the illusory belief that some standardized set of procedures called "science" can provide us with an unimpeachable source of moral authority, a suprahuman basis for answers to questions like "What is life, and when, and why?" "Why is death, and suffering?" "What is right and wrong to do?" "What are good and evil ends?" "How ought we to think and feel and behave?"
  • Science can tell us when a heart begins to beat, or movement begins, or what are the statistics on the survival of neonates of different gestational ages outside the womb. But science has no more authority than you do or I do to establish such criteria as the "true" definition of "life" or of human state or of personhood.
  • Social research can tell us how some people behave in the presence of what they believe to be legitimate authority. But it cannot tell us when authority is "legitimate" and when not, or how we must decide, or when it may be right or wrong to obey.
  • To ask of science, or expect of science, or accept unchallenged from science the answers to such questions is Scientism. And it is Technopoly's grand illusion.
  • In the institutional form it has taken in the United States, advertising is a symptom of a world-view that sees tradition as an obstacle to its claims. There can, of course, be no functioning sense of tradition without a measure of respect for symbols. Tradition is, in fact, nothing but the acknowledgment of the authority of symbols and the relevance of the narratives that gave birth to them. With the erosion of symbols there follows a loss of narrative, which is one of the most debilitating consequences of Technopoly's power.
  • What the advertiser needs to know is not what is right about the product but what is wrong about the buyer. And so the balance of business expenditures shifts from product research to market research, which means orienting business away from making products of value and toward making consumers feel valuable. The business of business becomes pseudo-therapy; the consumer, a patient reassured by psychodramas.
  • At the moment, it is considered necessary to introduce computers to the classroom, as it once was thought necessary to bring closed-circuit television and film to the classroom. To the question "Why should we do this?" the answer is: "To make learning more efficient and more interesting." Such an answer is considered entirely adequate, since in Technopoly efficiency and interest need no justification. It is, therefore, usually not noticed that this answer does not address the question "What is learning for?"
  • What this means is that somewhere near the core of Technopoly is a vast industry with license to use all available symbols to further the interests of commerce, by devouring the psyches of consumers.
  • In the twentieth century, such metaphors and images have come largely from the pens of social historians and researchers. Think of John Dewey, William James, Erik Erikson, Alfred Kinsey, Thorstein Veblen, Margaret Mead, Lewis Mumford, B. F. Skinner, Carl Rogers, Marshall McLuhan, Barbara Tuchman, Noam Chomsky, Robert Coles, even Stanley Milgram, and you must acknowledge that our ideas of what we are like and what kind of country we live in come from their stories to a far greater extent than from the stories of our most renowned novelists.
  • by definition, there can be no education philosophy that does not address what learning is for. Confucius, Plato, Quintilian, Cicero, Comenius, Erasmus, Locke, Rousseau, Jefferson, Russell, Montessori, Whitehead, and Dewey—each believed that there was some transcendent political, spiritual, or social idea that must be advanced through education.
  • Confucius advocated teaching "the Way" because in tradition he saw the best hope for social order. As our first systematic fascist, Plato wished education to produce philosopher kings. Cicero argued that education must free the student from the tyranny of the present. Jefferson thought the purpose of education is to teach the young how to protect their liberties. Rousseau wished education to free the young from the unnatural constraints of a wicked and arbitrary social order. And among John Dewey's aims was to help the student function without certainty in a world of constant change and puzzling ambiguities.
  • The point is that cultures must have narratives and will find them where they will, even if they lead to catastrophe. The alternative is to live without meaning, the ultimate negation of life itself.
  • It is also to the point to say that each narrative is given its form and its emotional texture through a cluster of symbols that call for respect and allegiance, even devotion.
  • The importance of the American Constitution is largely in its function as a symbol of the story of our origins. It is our political equivalent of Genesis. To mock it, to ignore it, to circumvent it is to declare the irrelevance of the story of the United States as a moral light unto the world. In like fashion, the Statue of Liberty is the key symbol of the story of America as the natural home of the teeming masses, from anywhere, yearning to be free.
  • There are those who believe—as did the great historian Arnold Toynbee—that without a comprehensive religious narrative at its center a culture must decline. Perhaps. There are, after all, other sources—mythology, politics, philosophy, and science, for example—but it is certain that no culture can flourish without narratives of transcendent origin and power.
  • This does not mean that the mere existence of such a narrative ensures a culture's stability and strength. There are destructive narratives. A narrative provides meaning, not necessarily survival—as, for example, the story provided by Adolf Hitler to the German nation in the 1930s.
  • What story does American education wish to tell now? In a growing Technopoly, what do we believe education is for?
  • The answers are discouraging, and one of them can be inferred from any television commercial urging the young to stay in school. The commercial will either imply or state explicitly that education will help the persevering student to get a good job. And that's it. Well, not quite. There is also the idea that we educate ourselves to compete with the Japanese or the Germans in an economic struggle to be number one.
  • Young men, for example, will learn how to make lay-up shots when they play basketball. To be able to make them is part of the definition of what good players are. But they do not play basketball for that purpose. There is usually a broader, deeper, and more meaningful reason for wanting to play—to assert their manhood, to please their fathers, to be acceptable to their peers, even for the sheer aesthetic pleasure of the game itself. What you have to do to be a success must be addressed only after you have found a reason to be successful.
  • Bloom's solution is that we go back to the basics of Western thought.
  • He wants us to teach our students what Plato, Aristotle, Cicero, Saint Augustine, and other luminaries have had to say on the great ethical and epistemological questions. He believes that by acquainting themselves with great books our students will acquire a moral and intellectual foundation that will give meaning and texture to their lives.
  • Hirsch's encyclopedic list is not a solution but a description of the problem of information glut. It is therefore essentially incoherent. But it also confuses a consequence of education with a purpose. Hirsch attempted to answer the question "What is an educated person?" He left unanswered the question "What is an education for?"
  • Those who reject Bloom's idea have offered several arguments against it. The first is that such a purpose for education is elitist: the mass of students would not find the great story of Western civilization inspiring, are too deeply alienated from the past to find it so, and would therefore have difficulty connecting the "best that has been thought and said" to their own struggles to find meaning in their lives.
  • A second argument, coming from what is called a "leftist" perspective, is even more discouraging. In a sense, it offers a definition of what is meant by elitism. It asserts that the "story of Western civilization" is a partial, biased, and even oppressive one. It is not the story of blacks, American Indians, Hispanics, women, homosexuals—of any people who are not white heterosexual males of Judeo-Christian heritage. This claim denies that there is or can be a national culture, a narrative of organizing power and inspiring symbols which all citizens can identify with and draw sustenance from. If this is true, it means nothing less than that our national symbols have been drained of their power to unite, and that education must become a tribal affair; that is, each subculture must find its own story and symbols, and use them as the moral basis of education.
  • Into this void comes the Technopoly story, with its emphasis on progress without limits, rights without responsibilities, and technology without cost. The Technopoly story is without a moral center. It puts in its place efficiency, interest, and economic advance. It promises heaven on earth through the conveniences of technological progress. It casts aside all traditional narratives and symbols that suggest stability and orderliness, and tells, instead, of a life of skills, technical expertise, and the ecstasy of consumption. Its purpose is to produce functionaries for an ongoing Technopoly.
  • It answers Bloom by saying that the story of Western civilization is irrelevant; it answers the political left by saying there is indeed a common culture whose name is Technopoly and whose key symbol is now the computer, toward which there must be neither irreverence nor blasphemy. It even answers Hirsch by saying that there are items on his list that, if thought about too deeply and taken too seriously, will interfere with the progress of technology.
Javier E

TikTok Brain Explained: Why Some Kids Seem Hooked on Social Video Feeds - WSJ - 0 views

  • Remember the good old days when kids just watched YouTube all day? Now that they binge on 15-second TikToks, those YouTube clips seem like PBS documentaries.
  • Many parents tell me their kids can’t sit through feature-length films anymore because to them the movies feel painfully slow. Others have observed their kids struggling to focus on homework. And reading a book? Forget about it.
  • What is happening to kids’ brains?
  • ...27 more annotations...
  • “It is hard to look at increasing trends in media consumption of all types, media multitasking and rates of ADHD in young people and not conclude that there is a decrease in their attention span,
  • Emerging research suggests that watching short, fast-paced videos makes it harder for kids to sustain activities that don’t offer instant—and constant—gratification.
  • One of the few studies specifically examining TikTok-related effects on the brain focused on Douyin, the TikTok equivalent in China, made by the same Chinese parent company, ByteDance Ltd. It found that the personalized videos the app’s recommendation engine shows users activate the reward centers of the brain, as compared with the general-interest videos shown to new users.
  • Brain scans of Chinese college students showed that areas involved in addiction were highly activated in those who watched personalized videos.
  • It also found some people have trouble controlling when to stop watching.
  • “If kids’ brains become accustomed to constant changes, the brain finds it difficult to adapt to a nondigital activity where things don’t move quite as fast,”
  • A TikTok spokeswoman said the company wants younger teens to develop positive digital habits early on, and that it recently made some changes aimed at curbing extensive app usage. For example, TikTok won’t allow users ages 13 to 15 to receive push notifications after 9 p.m. TikTok also periodically reminds users to take a break to go outside or grab a snack.
  • Kids have a hard time pulling away from videos on YouTube, too, and Google has made several changes to help limit its use, including turning off autoplay by default on accounts of people under 18.
  • When kids do things that require prolonged focus, such as reading or solving math problems, they’re using directed attention
  • This function starts in the prefrontal cortex, the part of the brain responsible for decision making and impulse control.
  • “Directed attention is the ability to inhibit distractions and sustain attention and to shift attention appropriately. It requires higher-order skills like planning and prioritizing,”
  • Kids generally have a harder time doing this—and putting down their videogame controllers—because the prefrontal cortex isn’t fully developed until age 25.
  • “We speculate that individuals with lower self-control ability have more difficulty shifting attention away from favorite video stimulation,
  • “In the short-form snackable world, you’re getting quick hit after quick hit, and as soon as it’s over, you have to make a choice,” said Mass General’s Dr. Marci, who wrote the new book “Rewired: Protecting Your Brain in the Digital Age.” The more developed the prefrontal cortex, the better the choices.
  • Dopamine is a neurotransmitter that gets released in the brain when it’s expecting a reward. A flood of dopamine reinforces cravings for something enjoyable, whether it’s a tasty meal, a drug or a funny TikTok video.
  • “TikTok is a dopamine machine,” said John Hutton, a pediatrician and director of the Reading & Literacy Discovery Center at Cincinnati Children’s Hospital. “If you want kids to pay attention, they need to practice paying attention.”
  • Researchers are just beginning to conduct long-term studies on digital media’s effects on kids’ brains. The National Institutes of Health is funding a study of nearly 12,000 adolescents as they grow into adulthood to examine the impact that many childhood experiences—from social media to smoking—have on cognitive development.
  • she predicts they will find that when brains repeatedly process rapid, rewarding content, their ability to process less-rapid, less-rewarding things “may change or be harmed.”
  • “It’s like we’ve made kids live in a candy store and then we tell them to ignore all that candy and eat a plate of vegetables,”
  • “We have an endless flow of immediate pleasures that’s unprecedented in human history.”
  • Parents and kids can take steps to boost attention, but it takes effort
  • Swap screen time for real time. Exercise and free play are among the best ways to build attention during childhood,
  • “Depriving kids of tech doesn’t work, but simultaneously reducing it and building up other things, like playing outside, does,”
  • Practice restraint.
  • “When you practice stopping, it strengthens those connections in the brain to allow you to stop again next time.”
  • Use tech’s own tools. TikTok has a screen-time management setting that allows users to cap their app usage.
  • Ensure good sleep. Teens are suffering from a sleep deficit.
peterconnelly

Opinion | Elon Musk's Tesla Management Is a Bad Sign for Twitter - The New York Times - 0 views

  • His promises to preserve free speech, ban spam bots and dramatically boost revenue may have earned the blessing of the company’s founder, Jack Dorsey, but with Twitter’s stock falling well below his offer price, Mr. Musk appears to be reneging on a deal that has made even Wall Street grow skeptical.
  • The way that he has managed and marketed his businesses from Tesla’s early days reveals a dysfunction behind the automaker’s veneer of technofuturism and past stock market successes.
  • ...11 more annotations...
  • he forces his employees to bridge the enormous gap between technological reality and his dreams. This disconnect fosters a negligent and sometimes cruel workplace, to disastrous effect.
  • That fully self-driving announcement that so delighted his fans came as a far more jarring revelation to the project’s engineers, who found out about their staggering new mission when Mr. Musk tweeted about it.
  • This is the fundamental weakness of every organization run as a cult of personality: The dear leader can’t be everywhere or make every decision but often fails to provide the clear code of values that allows managers to independently shape their decisions around common goals.
  • Lawsuits by workers and California’s Department of Fair Employment and Housing allege that Black workers were tasked with menial physical labor in parts of the factory nicknamed “the plantation,” where they were subjected to racist slurs and graffiti.
  • He ultimately gave up and cobbled together a manual-labor-intensive production line in an open-air tent.
  • Female workers have sued, alleging a pervasive culture of sexual harassment and groping by supervisors. Mr. Musk was indifferent, emailing workers who experienced abuse that “it is important to be thick-skinned.”
  • Mr. Musk’s reliance on hype is especially jarring.
  • By moving to buy Twitter, Mr. Musk has not only added another distraction to his long list but has also already shown the same drive to announce sweeping decisions in public.
  • Ultimately Mr. Musk’s goals for Twitter, as they are for Tesla, are not about making the right decisions for his companies or the people who make them possible.
  • They are about playing to the crowd and burnishing the legend that keeps fresh bodies and minds moving through the businesses that chew them up and spit them out.
  • Elon Musk's management at Tesla and his buying of Twitter
peterconnelly

6 Podcasts About the Dark Side of the Internet - The New York Times - 0 views

  • Online life is no longer optional for most people. The pandemic only accelerated a shift already underway, turning the internet into our school, office and social lifeline.
  • the internet’s tightening grip on every aspect of life isn’t without costs
  • These six shows tap into some of those dangers, exploring cybercrime, cryptocurrency and the many flavors of horror that lurk on the dark web.
  • ...9 more annotations...
  • Recent episodes have focused on mainstream tech stories — the crypto crash, the Netflix bubble bursting — but others go down truly weird rabbit holes, like the mysterious world of Katie Couric CBD scams on Facebook.
  • Begun during the early days of quarantine in March 2020, this affable show feels like eavesdropping on a conversation between two internet-savvy friends
  • “One Click” explores the stories of other young DNP victims, whose deaths were all caused by a combination of predatory marketing, toxic diet culture and unregulated online pharmacies. It’s upsetting but vital listening.
  • Delving into the deepest recesses of the dark web, “Hunting Warhead” follows a monthslong investigation by Einar Stangvik, the hacker, and Hakon Hoydal, the journalist, that ultimately led to the downfall of a local politician.
  • In a bizarre twist, the hack turned out to be motivated by the impending release of a movie named "The Interview" (starring Seth Rogen and James Franco), which depicted a fictional plot to assassinate Kim Jong-un of North Korea.
  • This wry, richly reported podcast from the BBC World Service chronicles every twist and turn of the saga and its implications far beyond Hollywood.
  • Ben Brock Johnson and Amory Sivertson, told stories inspired specifically by the quixotic virtual communities Reddit creates and the everyday mysteries it spotlights. (One classic episode focuses on a Reddit thread about a man who stumbled on a huge, inexplicable pile of plates in rural Pennsylvania.)
  • Cybercrime has snowballed so rapidly that the world has been caught off guard; last year’s ransomware attack on a major U.S. pipeline highlighted just how vulnerable many of our institutions are, not to mention our individual data.
  • The hosts, Dave Bittner and Joe Carrigan, are cybersecurity experts who emphasize solutions as they unfurl tales of social engineering, phishing scams and online con artists of every stripe.
criscimagnael

'I don't even remember what I read': People enter a 'dissociative state' when using soc... - 0 views

  • “I think people experience a lot of shame around social media use,” said lead author Amanda Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “One of the things I like about this framing of ‘dissociation’ rather than ‘addiction’ is that it changes the narrative. Instead of: ‘I should be able to have more self-control,’ it’s more like: ‘We all naturally dissociate in many ways throughout our day – whether it’s daydreaming or scrolling through Instagram, we stop paying attention to what’s happening around us.'”
  • “Having a stop built into a list meant that it was only going to be a few minutes of reading and then, if they wanted to really go crazy, they could read another list. But again, it’s only a few minutes. Having that bite-sized piece of content to consume was something that really resonated.”
  • Over the course of the month, 42% of participants (18 people) agreed or strongly agreed with that statement at least once. After the month, the researchers did in-depth interviews with 11 participants. Seven described experiencing dissociation while using Chirp.
  • ...3 more annotations...
  • “But people only realize that they’ve dissociated in hindsight. So once you exit dissociation there’s sometimes this feeling of: How did I get here? It’s like when people on social media realize: ‘Oh my gosh, how did 30 minutes go by? I just meant to check one notification.'”
  • The problem with social media platforms, the researchers said, is not that people lack the self-control needed to not get sucked in, but instead that the platforms themselves are not designed to maximize what people value.
  • These platforms need to create an end-of-use experience, so that people can have it fit in their day with their time-management goals.”
criscimagnael

Explained: Social media and the Texas shooter's messages | Explained News,The Indian Ex... - 0 views

  • Could technology companies have monitored ominous messages made by a gunman who Texas authorities say massacred 19 children and two teachers at an elementary school? Could they have warned the authorities? Answers to these questions remain unclear
  • But if nothing else, the shooting in Uvalde, Texas, seems highly likely to focus additional attention on how social platforms monitor what users are saying to and showing each other.
  • Shortly thereafter, Facebook stepped in to note that the gunman sent one-to-one direct messages, not public posts, and that they weren’t discovered until “after the terrible tragedy”.
  • ...7 more annotations...
  • Some reports appear to show that at least some of the gunman’s communications used Apple’s encrypted iPhone messaging services, which makes messages almost impossible for anyone else to read when sent to another iPhone user.
  • Facebook parent company Meta, which also owns Instagram, says it is working with law enforcement but declined to provide details.
  • A series of posts appeared on his Instagram in the days leading up to the shooting, including photos of a gun magazine in hand and two AR-style semi-automatic rifles. An Instagram user who was tagged in one post shared parts of what appears to be a chilling exchange on Instagram with Ramos, asking her to share his gun pictures with her more than 10,000 followers.
  • Meta has said it monitors people’s private messages for some kinds of harmful content, such as links to malware or images of child sexual exploitation. But copied images can be detected using unique identifiers — a kind of digital signature — which makes them relatively easy for computer systems to flag. Trying to interpret a string of threatening words — which can resemble a joke, satire or song lyrics — is a far more difficult task for artificial intelligence systems.
  • Facebook could, for instance, flag certain phrases such as “going to kill” or “going to shoot”, but without context — something AI in general has a lot of trouble with — there would be too many false positives for the company to analyze.
  • A recent Meta-commissioned report emphasized the benefits of such privacy but also noted some risks — including users who could abuse the encryption to sexually exploit children, facilitate human trafficking and spread hate speech.
  • Security experts say this could be done if Apple were to engineer a “backdoor” to allow access to messages sent by alleged criminals. Such a secret key would let them decipher encrypted information with a court order.
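  • The asymmetry described above — copied images are easy to flag via their digital signatures, while free-form text defeats context-blind matching — can be sketched in a few lines. This is an illustrative sketch only: the placeholder image bytes and threat-phrase list are assumptions, and a plain SHA-256 digest stands in for the perceptual hashes (e.g. PhotoDNA-style fingerprints) real systems use so that resized or re-encoded copies still match.

```python
import hashlib

# Image flagging: an exact copy of a known-bad image produces the same
# digest, so set membership suffices. (Placeholder bytes for illustration.)
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"<bytes of a known flagged image>").hexdigest()
}

def flag_image(image_bytes: bytes) -> bool:
    """Flag an image if its digest matches a known-bad fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

# Text flagging: naive phrase matching with no context, as the article
# describes, fires on jokes and idioms as readily as on real threats.
THREAT_PHRASES = ("going to kill", "going to shoot")  # assumed example list

def flag_text(message: str) -> bool:
    """Flag a message if it contains any listed threat phrase."""
    msg = message.lower()
    return any(phrase in msg for phrase in THREAT_PHRASES)

print(flag_image(b"<bytes of a known flagged image>"))       # True (exact copy)
print(flag_text("I'm going to kill it at karaoke tonight"))  # True (false positive)
print(flag_text("see you at dinner"))                        # False
```

  The second print is the false-positive problem in miniature: without context, an idiom trips the same rule as a genuine threat, which is why scaling this approach floods reviewers with benign matches.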
Javier E

You Have Permission to Be a Smartphone Skeptic - The Bulwark - 0 views

  • the brief return of one of my favorite discursive topics—are the kids all right?—in one of my least-favorite variations: why shouldn’t each of them have a smartphone and tablet?
  • One camp says yes, the kids are fine
  • complaints about screen time merely conceal a desire to punish hard-working parents for marginally benefiting from climbing luxury standards, provide examples of the moral panic occasioned by all new technologies, or mistakenly blame screens for ill effects caused by the general political situation.
  • ...38 more annotations...
  • No, says the other camp, led by Jonathan Haidt; the kids are not all right, their devices are partly to blame, and here are the studies showing why.
  • we should not wait for the replication crisis in the social sciences to resolve itself before we consider the question of whether the naysayers are on to something. And normal powers of observation and imagination should be sufficient to make us at least wary of smartphones.
  • These powerful instruments represent a technological advance on par with that of the power loom or the automobile
  • The achievement can be difficult to properly appreciate because instead of exerting power over physical processes and raw materials, they operate on social processes and the human psyche: They are designed to maximize attention, to make it as difficult as possible to look away.
  • they have transformed the qualitative experience of existing in the world. They give a person’s sociality the appearance and feeling of a theoretically endless open network, while in reality, algorithms quietly sort users into ideological, aesthetic, memetic cattle chutes of content.
  • Importantly, the process by which smartphones change us requires no agency or judgment on the part of a teen user, and yet that process is designed to provide what feels like a perfectly natural, inevitable, and complete experience of the world.
  • Smartphones offer a tactile portal to a novel digital environment, and this environment is not the kind of space you enter and leave
  • One reason commonly offered for maintaining our socio-technological status quo is that nothing really has changed with the advent of the internet, of Instagram, of Tiktok and Youtube and 4Chan
  • It is instead a complete shadow world of endless images; disembodied, manipulable personas; and the ever-present gaze of others. It lives in your pocket and in your mind.
  • The price you pay for its availability—and the engine of its functioning—is that you are always available to it, as well. Unless you have a strength of will that eludes most adults, its emissaries can find you at any hour and in any place to issue your summons to the grim pleasure palace.
  • the self-restraint and self-discipline required to use a smartphone well—that is, to treat it purely as an occasional tool rather than as a totalizing way of life—are unreasonable things to demand of teenagers
  • these are unreasonable things to demand of me, a fully adult woman
  • To enjoy the conveniences that a smartphone offers, I must struggle against the lure of the permanent scroll, the notification, the urge to fix my eyes on the circle of light and keep them fixed. I must resist the default pseudo-activity the smartphone always calls its user back to, if I want to have any hope of filling the moments of my day with the real activity I believe is actually valuable.
  • for a child or teen still learning the rudiments of self-control, still learning what is valuable and fulfilling, still learning how to prioritize what is good over the impulse of the moment, it is an absurd bar to be asked to clear
  • The expectation that children and adolescents will navigate new technologies with fully formed and muscular capacities for reason and responsibility often seems to go along with a larger abdication of responsibility on the part of the adults involved.
  • adults have frequently given in to a Faustian temptation: offering up their children’s generation to be used as guinea pigs in a mass longitudinal study in exchange for a bit more room to breathe in their own undeniably difficult roles as educators, caretakers, and parents.
  • It is not a particular activity that you start and stop and resume, and it is not a social scene that you might abandon when it suits you.
  • And this we must do without waiting for social science to hand us a comprehensive mandate it is fundamentally unable to provide; without cowering in panic over moral panics
  • The pre-internet advertising world was vicious, to be sure, but when the “pre-” came off, its vices were moved into a compound interest account. In the world of online advertising, at any moment, in any place, a user engaged in an infinite scroll might be presented with native content about how one Instagram model learned to accept her chunky (size 4) thighs, while in the next clip, another model relates how a local dermatologist saved her from becoming an unlovable crone at the age of 25
  • developing pathological interests and capacities used to take a lot more work than it does now
  • You had to seek it out, as you once had to seek out pornography and look someone in the eye while paying for it. You were not funneled into it by an omnipresent stream of algorithmically curated content—the ambience of digital life, so easily mistaken by the person experiencing it as fundamentally similar to the non-purposive ambience of the natural world.
  • And when interpersonal relations between teens become sour, nasty, or abusive, as they often do and always have, the unbalancing effects of transposing social life to the internet become quite clear
  • For both young men and young women, the pornographic scenario—dominance and degradation, exposure and monetization—creates an experiential framework for desires that they are barely experienced enough to understand.
  • This is not a world I want to live in. I think it hurts everyone; but I especially think it hurts those young enough to receive it as a natural state of affairs rather than as a profound innovation.
  • so I am baffled by the most routine objection to any blaming of smartphones for our society-wide implosion of teenagers’ mental health,
  • In short, and inevitably, today’s teenagers are suffering from capitalism—specifically “late capitalism,
  • what shocks me about this rhetorical approach is the rush to play defense for Apple and its peers, the impulse to wield the abstract concept of capitalism as a shield for actually existing, extremely powerful, demonstrably ruthless capitalist actors.
  • This motley alliance of left-coded theory about the evils of business and right-coded praxis in defense of a particular evil business can be explained, I think, by a deeper desire than overthrowing capitalism. It is the desire not to be a prude, hysteric, or bumpkin.
  • No one wants to come down on the side of tamping off pleasures and suppressing teen activity.
  • No one wants to be the shrill or leaden antagonist of a thousand beloved movies, inciting moral panics, scheming about how to stop the youths from dancing on Sunday.
  • But commercial pioneers are only just beginning to explore new frontiers in the profit-driven, smartphone-enabled weaponization of our own pleasures against us
  • To limit your moral imagination to the archetypes of the fun-loving rebel versus the stodgy enforcers in response to this emerging reality is to choose to navigate it with blinders on, to be a useful idiot for the robber barons of online life rather than a challenger to the corrupt order they maintain.
  • The very basic question that needs to be asked with every product rollout and implementation is what technologies enable a good human life?
  • this question is not, ultimately, the province of social scientists, notwithstanding how useful their work may be on the narrower questions involved. It is the free privilege, it is the heavy burden, for all of us, to think—to deliberate and make judgments about human good, about what kind of world we want to live in, and to take action according to that thought.
  • I am not sure how to build a world in which children and adolescents, at least, do not feel they need to live their whole lives online.
  • whatever particular solutions emerge from our negotiations with each other and our reckonings with the force of cultural momentum, they will remain unavailable until we give ourselves permission to set the terms of our common life.
  • But the environments in which humans find themselves vary significantly, and in ways that have equally significant downstream effects on the particular expression of human nature in that context.
  • most of all, without affording Apple, Facebook, Google, and their ilk the defensive allegiance we should reserve for each other.
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprung up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog.It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014.The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to miss that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Netanyahu's Dark Worldview - The Atlantic - 0 views

  • as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.
  • “You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
  • The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.
  • ...10 more annotations...
  • The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines.
  • Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”
  • By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”
  • Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right.
  • This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress.
  • After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.
  • This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead
  • “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.”
  • To his many critics, myself included, Netanyahu’s refusal to envision a different future makes him a “creature of the bunker,” perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage.
  • In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.
Javier E

Don't Do TikTok - by Jonathan V. Last - The Triad - 0 views

  • The small-bore concern is personal data. TikTok is basically Chinese spyware. The platform is owned by a Chinese company, Bytedance, which, like all Chinese companies, operates at the pleasure of the Chinese Communist Party.1 Anyone from Bytedance who wants to look into an American user’s TikTok data can do so. And they do it on the reg.
  • But personal data isn’t the big danger. The big danger is that TikTok decides what videos people see. Recommendations are driven entirely by the company’s black-box algorithm. And since TikTok answers to the Chinese Communist Party, then if the ChiComs tell TikTok to start pushing certain videos to certain people, that’s what TikTok will do.
  • It’s a gigantic propaganda engine. Making TikTok your platform of choice is the equivalent of using RT as your primary news source.
  • ...7 more annotations...
  • TikTok accounts run by the propaganda arm of the Chinese government have accumulated millions of followers and tens of millions of views, many of them on videos editorializing about U.S. politics without clear disclosure that they were posted by a foreign government.
  • The accounts are managed by MediaLinks TV, a registered foreign agent and Washington D.C.-based outpost of the main Chinese Communist Party television news outlet, China Central Television. The largest of them are @Pandaorama, which features cute videos about Chinese culture, @The…Optimist, which posts about sustainability, and @NewsTokss, which features coverage of U.S. national and international news.
  • In the run-up to the 2022 elections, the @NewsTokss account criticized some candidates (mostly Republicans), and favored others (mostly Democrats). A video from July began with the caption “Cruz, Abbott Don’t Care About Us”; a video from October was captioned “Rubio Has Done Absolutely Nothing.” But @NewsTokss did not target only Republicans; another October video asked viewers whether they thought President Joe Biden’s promise to sign a bill codifying abortion rights was a “political manipulation tactic.” Nothing in these videos disclosed to viewers that they were being pushed by a foreign government.
  • any Chinese play for Taiwan would be accompanied by TikTok aggressively pushing content in America designed to divide public opinion and weaken America’s commitment to Taiwan’s defense.
  • With all the official GOP machinations against gay marriage, it seems like if McConnell wanted that bill to fail, he could have pressured two Republican senators to vote against it. He said nothing. Trump said nothing. DeSantis said nothing. There was barely a whimper of protest from those who could have influenced this. Mike Lee and Ted Cruz engaged in theatrics, but no one actually used their power to stop this.
  • They let it pass because they don’t care and they want it to go away as an issue. And that goes for the MAGA GOP as well. Opposition to it in politics is all theater and will have a shelf life in riling up the base.
  • Evangelical religious convictions might be for one man + one woman marriage. But, the civil/political situation is far different from that and it’s worth recognizing where the GOP actually stands. They could have stopped this. They didn’t. That point should be clear, especially to their evangelical base who looks to the GOP to save America for them.
Javier E

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic - 0 views

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • ...15 more annotations...
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want
  • the moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict
Javier E

Opinion | Here's Hoping Elon Musk Destroys Twitter - The New York Times - 0 views

  • I’ve sometimes described being on Twitter as like staying too late at a bad party full of people who hate you. I now think this was too generous to Twitter. I mean, even the worst parties end.
  • Twitter is more like an existentialist parable of a party, with disembodied souls trying and failing to be properly seen, forever. It’s not surprising that the platform’s most prolific users often refer to it as “this hellsite.”
  • Among other things, he’s promised to reinstate Donald Trump, whose account was suspended after the Jan. 6 attack on the Capitol. Other far-right figures may not be far behind, along with Russian propagandists, Covid deniers and the like. Given Twitter’s outsize influence on media and politics, this will probably make American public life even more fractious and deranged.
  • ...12 more annotations...
  • The best thing it could do for society would be to implode.
  • Twitter hooks people in much the same way slot machines do, with what experts call an “intermittent reinforcement schedule.” Most of the time, it’s repetitive and uninteresting, but occasionally, at random intervals, some compelling nugget will appear. Unpredictable rewards, as the behavioral psychologist B.F. Skinner found with his research on rats and pigeons, are particularly good at generating compulsive behavior.
  • “I don’t know that Twitter engineers ever sat around and said, ‘We are creating a Skinner box,’” said Natasha Dow Schüll, a cultural anthropologist at New York University and author of a book about gambling machine design. But that, she said, is essentially what they’ve built. It’s one reason people who should know better regularly self-destruct on the site — they can’t stay away.
  • Twitter is not, obviously, the only social media platform with addictive qualities. But with its constant promise of breaking news, it feeds the hunger of people who work in journalism and politics, giving it a disproportionate, and largely negative, impact on those fields, and hence on our national life.
  • Twitter is much better at stoking tribalism than promoting progress.
  • According to a 2021 study, content expressing “out-group animosity” — negative feelings toward disfavored groups — is a major driver of social-media engagement
  • That builds on earlier research showing that on Twitter, false information, especially about politics, spreads “significantly farther, faster, deeper and more broadly than the truth.”
  • The company’s internal research has shown that Twitter’s algorithm amplifies right-wing accounts and news sources over left-wing ones.
  • This dynamic will probably intensify quite a bit if Musk takes over. Musk has said that Twitter has “a strong left bias,” and that he wants to undo permanent bans, except for spam accounts and those that explicitly call for violence. That suggests figures like Alex Jones, Steve Bannon and Marjorie Taylor Greene will be welcomed back.
  • But as one of the people who texted Musk pointed out, returning banned right-wingers to Twitter will be a “delicate game.” After all, the reason Twitter introduced stricter moderation in the first place was that its toxicity was bad for business
  • For A-list entertainers, The Washington Post reports, Twitter “is viewed as a high-risk, low-reward platform.” Plenty of non-celebrities feel the same way; I can’t count the number of interesting people who were once active on the site but aren’t anymore.
  • An influx of Trumpists is not going to improve the vibe. Twitter can’t be saved. Maybe, if we’re lucky, it can be destroyed.
Javier E

Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations - The New York Times - 0 views

  • “I did not comprehend that ChatGPT could fabricate cases,” he told Judge Castel.
  • At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him.
  • For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT.
  • ...9 more annotations...
  • “I continued to be duped by ChatGPT. It’s embarrassing,” Mr. Schwartz said.
  • As Mr. Schwartz answered the judge’s questions, the reaction in the courtroom, crammed with close to 70 people who included lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens.
  • “This case has reverberated throughout the entire legal profession,” said David Lat, a legal commentator. “It is a little bit like looking at a car wreck.”
  • The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers — even an existential threat to humanity — posed by artificial intelligence. It has also transfixed lawyers and judges.
  • Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Mata’s lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed.After Avianca’s lawyers could not locate the cases, Judge Castel ordered Mr. Mata’s lawyers to provide copies. They submitted a compendium of decisions.It turned out the cases were not real.
  • Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally.He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases.“I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said.
  • Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”
  • “This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”
  • In the declaration Mr. Schwartz filed this week, he described how he had posed questions to ChatGPT, and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot, which shows it tossing out words like “sure” and “certainly!”After one response, ChatGPT said cheerily, “I hope that helps!”
Javier E

Yuval Noah Harari paints a grim picture of the AI age, roots for safety checks | Techno... - 0 views

  • Yuval Noah Harari, known for the acclaimed non-fiction book Sapiens: A Brief History of Humankind, in his latest article in The Economist, has said that artificial intelligence has “hacked” the operating system of human civilization
  • he said that the newly emerged AI tools in recent years could threaten the survival of human civilization from an “unexpected direction.”
  • He demonstrated how AI could impact culture by talking about language, which is integral to human culture. “Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artifacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artifacts we created by inventing myths and writing scriptures,” wrote Harari.
  • ...8 more annotations...
  • He stated that democracy is also a language that dwells on meaningful conversations, and when AI hacks language it could also destroy democracy.
  • The 47-year-old wrote that the biggest challenge of the AI age was not the creation of intelligent tools but striking a collaboration between humans and machines.
  • To highlight the extent of how AI-driven misinformation can change the course of events, Harari touched upon the cult QAnon, a political movement affiliated with the far-right in the US. QAnon disseminated misinformation via “Q drops” that were seen as sacred by followers.
  • Harari also shed light on how AI could form intimate relationships with people and influence their decisions. “Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews,” he wrote. To demonstrate this, he cited the example of Blake Lemoine, a Google engineer who lost his job after publicly claiming that the AI chatbot LaMDA had become sentient. According to the historian, the controversial claim cost Lemoine his job. He asked: if AI can influence people to risk their jobs, what else could it induce them to do?
  • Harari also said that intimacy was an effective weapon in the political battle of minds and hearts. He said that in the past few years, social media has become a battleground for controlling human attention, and the new generation of AI can convince people to vote for a particular politician or buy a certain product.
  • In his bid to call attention to the need to regulate AI technology, Harari said that the first regulation should be to make it mandatory for AI to disclose that it is an AI. He said it was important to put a halt on ‘irresponsible deployment’ of AI tools in the public domain, and regulating it before it regulates us.
  • The author also shed light on how the current social and political systems are incapable of dealing with the challenges posed by AI. Harari emphasised the need to have an ethical framework to respond to challenges posed by AI.
  • He argued that while GPT-3 had made remarkable progress, it was far from replacing human interactions
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don't know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It's not so much that the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AIs. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape, so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
Javier E

Elon Musk Is Not Playing Four-Dimensional Chess - 0 views

  • Musk is not wrong that Twitter is chock-full of noise and garbage, but the most pernicious stuff comes from real people and a media ecosystem that amplifies and rewards incendiary bullshit
  • This dynamic is far more of a problem for Twitter (but also the news media and the internet in general) than shadowy bot farms are. But it’s also a dilemma without much of a concrete solution
  • Were Musk actually curious or concerned with the health of the online public discourse, he might care about the ways that social media platforms like Twitter incentivize this behavior and create an information economy where our sense of proportion on a topic can be so easily warped. But Musk isn’t interested in this stuff, in part because he is a huge beneficiary of our broken information environment and can use it to his advantage to remain constantly in the spotlight.
  • Musk’s concern with bots isn’t only a bullshit tactic he’s using to snake out of a bad business deal and/or get a better price for Twitter; it’s also a great example of his shallow thinking. The man has at least some ability to oversee complex engineering systems that land rockets, but his narcissism affords him a two-dimensional understanding of the way information travels across social media.
  • He is drawn to the conspiratorial nature of bots and information manipulation, because it is a more exciting and easier-to-understand solution to more complex or uncomfortable problems. Instead of facing the reality that many people dislike him as a result of his personality, behavior, politics, or shitty management style, he blames bots. Rather than try to understand the gnarly mechanics and hard-to-solve problems of democratized speech, he sorts them into overly simplified boxes like censorship and spam and then casts himself as the crusading hero who can fix it all. But he can’t and won’t, because he doesn’t care enough to find the answers.
  • Musk isn’t playing chess or even checkers. He’s just the richest man in the world, bored, mad, and posting like your great-uncle.
Javier E

Among the Disrupted - The New York Times - 0 views

  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university,
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • Greif’s book is a prehistory of our predicament, of our own “crisis of man.” (The “man” is archaic, the “crisis” is not.) It recognizes that the intellectual history of modernity may be written in part as the epic tale of a series of rebellions against humanism
  • We are not becoming transhumanists, obviously. We are too singular for the Singularity. But are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”
  • Here is his conclusion: “Anytime your inquiries lead you to say, ‘At this moment we must ask and decide who we fundamentally are, our solution and salvation must lie in a new picture of ourselves and humanity, this is our profound responsibility and a new opportunity’ — just stop.” Greif seems not to realize that his own book is a lasting monument to precisely such inquiry, and to its grandeur
  • “Answer, rather, the practical matters,” he counsels, in accordance with the current pragmatist orthodoxy. “Find the immediate actions necessary to achieve an aim.” But before an aim is achieved, should it not be justified? And the activity of justification may require a “picture of ourselves.” Don’t just stop. Think harder. Get it right.
  • — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • Who has not felt superior to humanism? It is the cheapest target of all: Humanism is sentimental, flabby, bourgeois, hypocritical, complacent, middlebrow, liberal, sanctimonious, constricting and often an alibi for power
  • what is humanism? For a start, humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating
  • The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality
  • Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
  • And posthumanism? It elects to understand the world in terms of impersonal forces and structures, and to deny the importance, and even the legitimacy, of human agency.
  • There have been humane posthumanists and there have been inhumane humanists. But the inhumanity of humanists may be refuted on the basis of their own worldview
  • the condemnation of cruelty toward “man the machine,” to borrow the old but enduring notion of an 18th-century French materialist, requires the importation of another framework of judgment. The same is true about universalism, which every critic of humanism has arraigned for its failure to live up to the promise of a perfect inclusiveness
  • there has never been a universalism that did not exclude. Yet the same is plainly the case about every particularism, which is nothing but a doctrine of exclusion; and the correction of particularism, the extension of its concept and its care, cannot be accomplished in its own name. It requires an idea from outside, an idea external to itself, a universalistic idea, a humanistic idea.
  • Asking universalism to keep faith with its own principles is a perennial activity of moral life. Asking particularism to keep faith with its own principles is asking for trouble.
  • there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life
  • a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion
  • “Our very mastery seems to escape our mastery,” Michel Serres has anxiously remarked. “How can we dominate our domination; how can we master our own mastery?”
  • universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter
  • Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons
  • The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
  • Is all this — is humanism — sentimental? But sentimentality is not always a counterfeit emotion. Sometimes sentiment is warranted by reality.
  • The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence
  • a complacent humanist is a humanist who has not read his books closely, since they teach disquiet and difficulty. In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter.