TOK Friends: Group items tagged future of technology

knudsenlu

Quinn Norton: The New York Times Fired My Doppelgänger - The Atlantic - 0 views

  • Quinn Norton
  • The day before Valentine’s Day, social media created a bizarro-world version of me. I have seen strange ideas about me online before, but this doppelgänger was so far from resembling me that I told friends and loved ones I didn’t want to even try to rebut it. It was a leading question turned into a human form. The net created a person with my name and face, but with so little relationship to me, she could have been an invader from an alternate universe.
  • It started when The New York Times hired me for its editorial board. In January, the Times sought me out because, editorial leaders told me, the Times as an institution is struggling with understanding how technology is shifting society and politics. We talked for a while. I discussed my work, my beliefs, and my background.
  • I was hesitant with the Times. They were far out of my comfort zone, but I felt that the people I was talking to had a sincerity greater than their confusion. Nothing that has happened since then has dissuaded me from that impression.
  • If you’re reading this, especially on the internet, you are the teacher for those institutions at a local, national, and global level. I understand that you didn’t ask for this position. Neither did I. History doesn’t ask you if you want to be born in a time of upheaval, it just tells you when you are. When the backlash began, I got the call from the person who had sought me out and recruited me. The fear I heard in that shaky voice coming through my mobile phone was unmistakable. It was the fear of a mob, of the unknown, and of the idea that maybe they had gotten it wrong and done something terrible. I have felt all of those things. Many of us have. It’s not a place of strength, even when it seems to be coming from someone standing in a place of power. The Times didn’t know what the internet was doing—tearing down a new hire, exposing a fraud, threatening them—everything seemed to be in the mix.
  • I had even written about context collapse myself, but that hadn’t saved me from falling into it, and then hurting other people I didn’t mean to hurt. This particular collapse didn’t create much of a doppelgänger, but it did find me spending a morning as a defensive jerk. I’m very sorry for that dumb mistake. It helped me learn a lesson: Be damn sure when you make angry statements. Check them out long enough that, even if the statements themselves are still angry, you are not angry by the time you make them. Again and again, I have learned this: Don’t internet angry. If you’re angry, internet later.
  • I think if I’d gotten to write for the Times as part of their editorial board, this might have been different. I might have been in a position to show how our media doppelgängers get invented, and how we can unwind them. It takes time and patience. It doesn’t come from denying the doppelgänger—there’s nothing there to deny. I was accused of homophobia because of the in-group language I used with anons when I worked with them. (“Anons” refers to people who identify as part of the activist collective Anonymous.) I was accused of racism for use of taboo language, mainly in a nine-year-old retweet in support of Obama. Intentions aside, it wasn’t a great tweet, and I was probably overemotional when I retweeted it.
  • In late 2015 I woke up a little before 6 a.m., jet-lagged in New York, and started looking at Twitter. There was a hashtag, I don’t remember if it was trending or just in my timeline, called #whitegirlsaremagic. I clicked on it, and found it was racist and sexist dross. It was being promulgated in opposition to another hashtag, #blackgirlsaremagic. I clicked on that, and found a few model shots and borderline soft-core porn of black women. Armed with this impression, I set off to tweet in righteous anger about how much I disliked women being reduced to sex objects regardless of race. I was not just wrong in this moment, I was incoherently wrong. I had made my little mental model of what #blackgirlsaremagic was, and I had no clue that I had no clue what I was talking about. My 60-second impression of #whitegirlsaremagic was dead-on, but #blackgirlsaremagic didn’t fit in the last few tweets my browser had loaded.
  • I had been a victim of something the sociologists Alice Marwick and danah boyd call context collapse, where people create online culture meant for one in-group, but exposed to any number of out-groups without its original context by social-media platforms, where it can be recontextualized easily and accidentally.
  • Not everyone believes loving engagement is the best way to fight evil beliefs, but it has a good track record. Not everyone is in a position to engage safely with racists, sexists, anti-Semites, and homophobes, but for those who are, it’s a powerful tool. Engagement is not the one true answer to the societal problems destabilizing America today, but there is no one true answer. The way forward is as multifarious and diverse as America is, and a method of nonviolent confrontation and accountability, arising from my pacifism, is what I can bring to helping my society.
  • Here is your task, person on the internet, reader of journalism, speaker to the world on social media: You make the world now, in a way that you never did before. Your beliefs have a power they’ve never had in human history. You must learn to investigate with a scientific and loving mind not only what is true, but what is effective in the world. Right now we are a world of geniuses who constantly love to call each other idiots. But humanity is the most complicated thing we’ve found in the universe, and so far as we know, we’re the only thing even looking. We are miracles by the billions with powers and luxuries beyond the dreams of kings of old.
  • We are powerful creatures, but power must come with gentleness and responsibility. No one prepared us for this, no one trained us, no one came before us with an understanding of our world. There were hints, and wise people, and I lean on and cherish them. But their philosophies and imaginations can only take us so far. We have to build our own philosophies and imagine great futures for our world in order to have any futures at all. Let mercy guide us forward in these troubled times. Let yourself imagine, because imagination is the wellspring of hope. Here, in the beginning of the 21st century, hope is our duty to the future.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”).
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively suggested that it can be done.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable. (A short simulation at the end of this list makes the point concrete.)
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • “we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
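To make the law-of-large-numbers answer above concrete, here is a small simulation in Python. It is an illustrative sketch only: the 300-batter league and the .270 true hit probability are invented assumptions, not figures from Nisbett’s survey or the article.

    import random

    random.seed(0)  # fixed seed so the illustration is reproducible

    TRUE_AVG = 0.270  # assumed true hit probability for every batter
    BATTERS = 300     # assumed number of Major League batters

    def best_average(at_bats):
        # Simulate each batter's at bats and return the league-leading average.
        return max(
            sum(random.random() < TRUE_AVG for _ in range(at_bats)) / at_bats
            for _ in range(BATTERS)
        )

    print("best average after  20 at bats:", round(best_average(20), 3))
    print("best average after 500 at bats:", round(best_average(500), 3))

With 20 at bats the simulated league leader typically bats well above .450; with 500 at bats the best average falls back toward the low .300s. Outlier averages are a small-sample effect, which is the answer Nisbett’s trained students give.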
Javier E

How To Look Smart, Ctd - The Daily Dish | By Andrew Sullivan - 0 views

  • these questions tend to overlook the way IQ tests are designed. As a neuropsychologist who has administered hundreds of these measures, I can tell you that their structures reflect a deeply embedded bias toward intelligence as a function of reading skills
Javier E

The Lasting Lessons of John Conway's Game of Life - The New York Times - 0 views

  • “Because of its analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called ‘simulation games,’” Mr. Gardner wrote when he introduced Life to the world 50 years ago with his October 1970 column.
  • The Game of Life motivated the use of cellular automata in the rich field of complexity science, with simulations modeling everything from ants to traffic, clouds to galaxies. More trivially, the game attracted a cult of “Lifenthusiasts,” programmers who spent a lot of time hacking Life — that is, constructing patterns in hopes of spotting new Life-forms.
  • The tree of Life also includes oscillators, such as the blinker, and spaceships of various sizes (the glider being the smallest).
  • Patterns that didn’t change one generation to the next, Dr. Conway called still lifes — such as the four-celled block, the six-celled beehive or the eight-celled pond. Patterns that took a long time to stabilize, he called methuselahs.
  • The second thing Life shows us is something that Darwin hit upon when he was looking at Life, the organic version. Complexity arises from simplicity!
  • I first encountered Life at the Exploratorium in San Francisco in 1978. I was hooked immediately by the thing that has always hooked me — watching complexity arise out of simplicity.
  • Life shows you two things. The first is sensitivity to initial conditions. A tiny change in the rules can produce a huge difference in the output, ranging from complete destruction (no dots) through stasis (a frozen pattern) to patterns that keep changing as they unfold.
  • Life shows us complex virtual “organisms” arising out of the interaction of a few simple rules — so goodbye “Intelligent Design.”
  • I’ve wondered for decades what one could learn from all that Life hacking. I recently realized it’s a great place to try to develop “meta-engineering” — to see if there are general principles that govern the advance of engineering and help us predict the overall future trajectory of technology.
  • Melanie Mitchell— Professor of complexity, Santa Fe Institute
  • Given Conway’s proof that the Game of Life can be made to simulate a Universal Computer — that is, it could be “programmed” to carry out any computation that a traditional computer can do — the extremely simple rules can give rise to the most complex and most unpredictable behavior possible. This means that there are certain properties of the Game of Life that can never be predicted, even in principle! (A minimal implementation of the rules appears at the end of this list.)
  • I use the Game of Life to make vivid for my students the ideas of determinism, higher-order patterns and information. One of its great features is that nothing is hidden; there are no black boxes in Life, so you know from the outset that anything that you can get to happen in the Life world is completely unmysterious and explicable in terms of a very large number of simple steps by small items.
  • In Thomas Pynchon’s novel “Gravity’s Rainbow,” a character says, “But you had taken on a greater and more harmful illusion. The illusion of control. That A could do B. But that was false. Completely. No one can do. Things only happen.” This is compelling but wrong, and Life is a great way of showing this.
  • In Life, we might say, things only happen at the pixel level; nothing controls anything, nothing does anything. But that doesn’t mean that there is no such thing as action, as control; it means that these are higher-level phenomena composed (entirely, with no magic) from things that only happen.
  • Stephen Wolfram— Scientist and C.E.O., Wolfram Research
  • Brian Eno— Musician, London
  • Bert Chan— Artificial-life researcher and creator of the continuous cellular automaton “Lenia,” Hong Kong
  • it did have a big impact on beginner programmers, like me in the 90s, giving them a sense of wonder and a kind of confidence that some easy-to-code math models can produce complex and beautiful results. It’s like a starter kit for future software engineers and hackers, together with Mandelbrot Set, Lorenz Attractor, et cetera.
  • if we think about our everyday life, about corporations and governments, the cultural and technical infrastructures humans built for thousands of years, they are not unlike the incredible machines that are engineered in Life.
  • In normal times, they are stable and we can keep building stuff one component upon another, but in harder times like this pandemic or a new Cold War, we need something that is more resilient and can prepare for the unpreparable. That would need changes in our “rules of life,” which we take for granted.
  • Rudy Rucker— Mathematician and author of “Ware Tetralogy,” Los Gatos, Calif.
  • That’s what chaos is about. The Game of Life, or a kinky dynamical system like a pair of pendulums, or a candle flame, or an ocean wave, or the growth of a plant — they aren’t readily predictable. But they are not random. They do obey laws, and there are certain kinds of patterns — chaotic attractors — that they tend to produce. But again, unpredictable is not random. An important and subtle distinction which changed my whole view of the world.
  • William Poundstone— Author of “The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge,” Los Angeles, Calif.
  • The Game of Life’s pulsing, pyrotechnic constellations are classic examples of emergent phenomena, introduced decades before that adjective became a buzzword.
  • Fifty years later, the misfortunes of 2020 are the stuff of memes. The biggest challenges facing us today are emergent: viruses leaping from species to species; the abrupt onset of wildfires and tropical storms as a consequence of a small rise in temperature; economies in which billions of free transactions lead to staggering concentrations of wealth; an internet that becomes more fraught with hazard each year
  • Looming behind it all is our collective vision of an artificial intelligence-fueled future that is certain to come with surprises, not all of them pleasant.
  • The name Conway chose — the Game of Life — frames his invention as a metaphor. But I’m not sure that even he anticipated how relevant Life would become, and that in 50 years we’d all be playing an emergent game of life and death.
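For readers who want to poke at Life itself, here is a minimal sketch of the game the contributors describe, written for this page as an illustration (the rules encoded are the standard ones: a dead cell with exactly three live neighbors is born, a live cell with two or three live neighbors survives).

    from collections import Counter

    def step(live):
        # Count the live neighbors of every cell that touches a live cell,
        # then apply Conway's rules: birth on 3, survival on 2 or 3.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # The four-celled block, a still life: a step leaves it unchanged.
    block = {(0, 0), (1, 0), (0, 1), (1, 1)}
    assert step(block) == block

    # The glider, the smallest spaceship: after four generations it
    # reappears shifted one cell diagonally.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    g = glider
    for _ in range(4):
        g = step(g)
    assert g == {(x + 1, y + 1) for (x, y) in glider}

Every still life, oscillator, and spaceship in the tree of Life, and ultimately the universal computer, unfolds from those few lines; nothing is hidden.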
Javier E

The Navy's USS Gabrielle Giffords and the Future of Work - The Atlantic - 0 views

  • Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone.
  • Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles.
  • If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.”
  • “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.”
  • The phenomenon is sped by automation, which usurps routine tasks, leaving employees to handle the nonroutine and unanticipated—and the continued advance of which throws the skills employers value into flux
  • Or, for that matter, on the relevance of the question What do you want to be when you grow up?
  • By 2020, a 2016 World Economic Forum report predicted, “more than one-third of the desired core skill sets of most occupations” will not have been seen as crucial to the job when the report was published
  • I asked John Sullivan, a prominent Silicon Valley talent adviser, why should anyone take the time to master anything at all? “You shouldn’t!” he replied.
  • Minimal manning—and the evolution of the economy more generally—requires a different kind of worker, with not only different acquired skills but different inherent abilities
  • It has implications for the nature and utility of a college education, for the path of careers, for inequality and employability—even for the generational divide.
  • Then, in 2001, Donald Rumsfeld arrived at the Pentagon. The new secretary of defense carried with him a briefcase full of ideas from the corporate world: downsizing, reengineering, “transformational” technologies. Almost immediately, what had been an experimental concept became an article of faith
  • But once cadets got into actual command environments, which tend to be fluid and full of surprises, a different picture emerged. “Psychological hardiness”—a construct that includes, among other things, a willingness to explore “multiple possible response alternatives,” a tendency to “see all experience as interesting and meaningful,” and a strong sense of self-confidence—was a better predictor of leadership ability in officers after three years in the field.
  • Because there really is no such thing as multitasking—just a rapid switching of attention—I began to feel overstrained, put upon, and finally irked by the impossible set of concurrent demands. Shouldn’t someone be giving me a hand here? This, Hambrick explained, meant I was hitting the limits of working memory—basically, raw processing power—which is an important aspect of “fluid intelligence” and peaks in your early 20s. This is distinct from “crystallized intelligence”—the accumulated facts and know-how on your hard drive—which peaks in your 50s.
  • Others noticed the change but continued to devote equal attention to all four tasks. Their scores fell. This group, Hambrick found, was high in “conscientiousness”—a trait that’s normally an overwhelming predictor of positive job performance. We like conscientious people because they can be trusted to show up early, double-check the math, fill the gap in the presentation, and return your car gassed up even though the tank was nowhere near empty to begin with. What struck Hambrick as counterintuitive and interesting was that conscientiousness here seemed to correlate with poor performance.
  • he discovered another correlation in his test: The people who did best tended to score high on “openness to new experience”—a personality trait that is normally not a major job-performance predictor and that, in certain contexts, roughly translates to “distractibility.”
  • To borrow the management expert Peter Drucker’s formulation, people with this trait are less focused on doing things right, and more likely to wonder whether they’re doing the right things.
  • High in fluid intelligence, low in experience, not terribly conscientious, open to potential distraction—this is not the classic profile of a winning job candidate. But what if it is the profile of the winning job candidate of the future?
  • One concerns “grit”—a mind-set, much vaunted these days in educational and professional circles, that allows people to commit tenaciously to doing one thing well
  • These ideas are inherently appealing; they suggest that dedication can be more important than raw talent, that the dogged and conscientious will be rewarded in the end.
  • she studied West Point students and graduates.
  • Traditional measures such as SAT scores and high-school class rank “predicted leader performance in the stable, highly regulated environment of West Point” itself.
  • It would be supremely ironic if the advance of the knowledge economy had the effect of devaluing knowledge. But that’s what I heard, recurrently
  • “Fluid, learning-intensive environments are going to require different traits than classical business environments,” I was told by Frida Polli, a co-founder of an AI-powered hiring platform called Pymetrics. “And they’re going to be things like ability to learn quickly from mistakes, use of trial and error, and comfort with ambiguity.”
  • “We’re starting to see a big shift,” says Guy Halfteck, a people-analytics expert. “Employers are looking less at what you know and more and more at your hidden potential” to learn new things
  • advice to employers? Stop hiring people based on their work experience. Because in these environments, expertise can become an obstacle.
  • “The Curse of Expertise.” The more we invest in building and embellishing a system of knowledge, they found, the more averse we become to unbuilding it.
  • All too often experts, like the mechanic in LePine’s garage, fail to inspect their knowledge structure for signs of decay. “It just didn’t occur to him,” LePine said, “that he was repeating the same mistake over and over.”
  • The devaluation of expertise opens up ample room for different sorts of mistakes—and sometimes creates a kind of helplessness.
  • Aboard littoral combat ships, the crew lacks the expertise to carry out some important tasks, and instead has to rely on civilian help
  • Meanwhile, the modular “plug and fight” configuration was not panning out as hoped. Converting a ship from sub-hunter to minesweeper or minesweeper to surface combatant, it turned out, was a logistical nightmare
  • So in 2016 the concept of interchangeability was scuttled for a “one ship, one mission” approach, in which the extra 20-plus sailors became permanent crew members
  • “As equipment breaks, [sailors] are required to fix it without any training,” a Defense Department Test and Evaluation employee told Congress. “Those are not my words. Those are the words of the sailors who were doing the best they could to try to accomplish the missions we gave them in testing.”
  • These results were, perhaps, predictable given the Navy’s initial, full-throttle approach to minimal manning—and are an object lesson on the dangers of embracing any radical concept without thinking hard enough about the downsides
  • a world in which mental agility and raw cognitive speed eclipse hard-won expertise is a world of greater exclusion: of older workers, slower learners, and the less socially adept.
  • if you keep going down this road, you end up with one really expensive ship with just a few people on it who are geniuses … That’s not a future we want to see, because you need a large enough crew to conduct multiple tasks in combat.
  • What does all this mean for those of us in the workforce, and those of us planning to enter it? It would be wrong to say that the 10,000-hours-of-deliberate-practice idea doesn’t hold up at all. In some situations, it clearly does
  • A spinal surgery will not be performed by a brilliant dermatologist. A criminal-defense team will not be headed by a tax attorney. And in tech, the demand for specialized skills will continue to reward expertise handsomely.
  • But in many fields, the path to success isn’t so clear. The rules keep changing, which means that highly focused practice has a much lower return
  • In uncertain environments, Hambrick told me, “specialization is no longer the coin of the realm.”
  • It leaves us with lifelong learning,
  • I found myself the target of career suggestions. “You need to be a video guy, an audio guy!” the Silicon Valley talent adviser John Sullivan told me, alluding to the demise of print media
  • I found the prospect of starting over just plain exhausting. Building a professional identity takes a lot of resources—money, time, energy. After it’s built, we expect to reap gains from our investment, and—let’s be honest—even do a bit of coasting. Are we equipped to continually return to apprentice mode? Will this burn us out?
  • Everybody I met on the Giffords seemed to share that mentality. They regarded every minute on board—even during a routine transit back to port in San Diego Harbor—as a chance to learn something new.
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • The modern economy is complex, and most of these trends have multiple causes.
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  • Economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
lenaurick

Why your brain loves procrastination - Vox - 0 views

  • Roughly 5 percent of the population has such a problem with chronic procrastination that it seriously affects their lives.
  • Conventional wisdom has long suggested that procrastination is all about poor time management and willpower. But more recently, psychologists have been discovering that it may have more to do with how our brains and emotions work.
  • Procrastination, they've realized, appears to be a coping mechanism. When people procrastinate, they're avoiding emotionally unpleasant tasks and instead doing something that provides a temporary mood boost. The procrastination itself then causes shame and guilt — which in turn leads people to procrastinate even further, creating a vicious cycle.
  • For example, psychologist Tim Pychyl has co-authored a paper showing that students who forgave themselves for procrastinating on a previous exam were actually less likely to procrastinate on their next test. He and others have also found that people prone to procrastination are, overall, less compassionate toward themselves — an insight that points to ways to help.
  • But psychologists see procrastination as a misplaced coping mechanism, as an emotion-focused coping strategy. [People who procrastinate are] using avoidance to cope with emotions, and many of them are unconscious emotions.
  • I used to procrastinate, and now I don't, because I got all these wicked strategies. And it’s every level: some of it’s behavioral, some of it’s emotional, some of it’s cognitive.
  • Whenever we face a task, we’re not going to feel like doing it. Somehow adults believe that their motivational state has to match the task at hand. We say, "I’m not in the mood." Our motivational state rarely matches the task at hand, so we always have to use self-regulation skills to bring our focus to it. So at first it will be, "Okay, I recognize that I don’t feel like it, but I’m just gonna get started."
  • We know from psychological research by [Andrew] Elliot and others that progress on our goals feeds our well-being. So the most important thing you can do is bootstrap a little progress. Get a little progress, and that’s going to fuel your well-being and your motivation.
  • Implementation intentions take the form of "If, then." "If the phone rings, then I'm not going to answer it." "If my friends call me to say we're going out, I'm going to say no." So you've already made this pre-commitment.
  • OHIO rule: only handle it once. And I’m like that with email. I look at that email and say, "I can reply to it now, or I can throw it out," but there’s not much of a middle ground. I’m not going to save it for a while.
  • We [think] that people will make less procrastinatory choices now because they’ll realize that "It’s me in the future we’re talking about here. I’m going to be under the gun."
  • Recently we’ve been doing research that relates to the work on "present self"/"future self" because what’s happening with procrastination is that "present self" is always trumping "future self."
  • He’s shown that in experimental settings if someone sees their own picture digitally aged, they’re more likely to allocate funds to retirement. When [the researchers] did the fMRI studies, they found our brain processes present self and future self differently. We think of future self more like a stranger.
  • The people who see the present and future self as more overlapping have more self-continuity and report less procrastination.
  • Because it’s all about self-deception — you aren’t aware that it’s going to cost you, but you are.
Javier E

Where We Went Wrong | Harvard Magazine - 0 views

  • John Kenneth Galbraith assessed the trajectory of America’s increasingly “affluent society.” His outlook was not a happy one. The nation’s increasingly evident material prosperity was not making its citizens any more satisfied. Nor, at least in its existing form, was it likely to do so
  • One reason, Galbraith argued, was the glaring imbalance between the opulence in consumption of private goods and the poverty, often squalor, of public services like schools and parks
  • Another was that even the bountifully supplied private goods often satisfied no genuine need, or even desire; a vast advertising apparatus generated artificial demand for them, and satisfying this demand failed to provide meaningful or lasting satisfaction.
  • economist J. Bradford DeLong ’82, Ph.D. ’87, looking back on the twentieth century two decades after its end, comes to a similar conclusion but on different grounds.
  • DeLong, professor of economics at Berkeley, looks to matters of “contingency” and “choice”: at key junctures the economy suffered “bad luck,” and the actions taken by the responsible policymakers were “incompetent.”
  • these were “the most consequential years of all humanity’s centuries.” The changes they saw, while in the first instance economic, also “shaped and transformed nearly everything sociological, political, and cultural.”
  • DeLong’s look back over the twentieth century energetically encompasses political and social trends as well; nor is his scope limited to the United States. The result is a work of strikingly expansive breadth and scope
  • labeling the book an economic history fails to convey its sweeping frame.
  • The century that is DeLong’s focus is what he calls the “long twentieth century,” running from just after the Civil War to the end of the 2000s when a series of events, including the biggest financial crisis since the 1930s followed by likewise the most severe business downturn, finally rendered the advanced Western economies “unable to resume economic growth at anything near the average pace that had been the rule since 1870.”
  • And behind those missteps in policy stood not just failures of economic thinking but a voting public that reacted perversely, even if understandably, to the frustrations poor economic outcomes had brought them.
  • Within this 140-year span, DeLong identifies two eras of “El Dorado” economic growth, each facilitated by expanding globalization, and each driven by rapid advances in technology and changes in business organization for applying technology to economic ends
  • from 1870 to World War I, and again from World War II to 1973
  • fellow economist Robert J. Gordon ’62, who in his monumental treatise on The Rise and Fall of American Economic Growth (reviewed in “How America Grew,” May-June 2016, page 68) hailed 1870-1970 as a “special century” in this regard (interrupted midway by the disaster of the 1930s).
  • Gordon highlighted the role of a cluster of once-for-all-time technological advances—the steam engine, railroads, electrification, the internal combustion engine, radio and television, powered flight
  • Pessimistic that future technological advances (most obviously, the computer and electronics revolutions) will generate productivity gains to match those of the special century, Gordon therefore saw little prospect of a return to the rapid growth of those halcyon days.
  • DeLong instead points to a series of noneconomic (and non-technological) events that slowed growth, followed by a perverse turn in economic policy triggered in part by public frustration: In 1973 the OPEC cartel tripled the price of oil, and then quadrupled it yet again six years later.
  • For all too many Americans (and citizens of other countries too), the combination of high inflation and sluggish growth meant that “social democracy was no longer delivering the rapid progress toward utopia that it had delivered in the first post-World War II generation.”
  • Frustration over these and other ills in turn spawned what DeLong calls the “neoliberal turn” in public attitudes and economic policy. The new economic policies introduced under this rubric “did not end the slowdown in productivity growth but reinforced it.”
  • the tax and regulatory changes enacted in this new climate channeled most of what economic gains there were to people already at the top of the income scale
  • Meanwhile, progressive “inclusion” of women and African Americans in the economy (and in American society more broadly) meant that middle- and lower-income white men saw even smaller gains—and, perversely, reacted by providing still greater support for policies like tax cuts for those with far higher incomes than their own.
  • Daniel Bell’s argument in his 1976 classic The Cultural Contradictions of Capitalism. Bell famously suggested that the very success of a capitalist economy would eventually undermine a society’s commitment to the values and institutions that made capitalism possible in the first place
  • In DeLong’s view, the “greatest cause” of the neoliberal turn was “the extraordinary pace of rising prosperity during the Thirty Glorious Years, which raised the bar that a political-economic order had to surpass in order to generate broad acceptance.” At the same time, “the fading memory of the Great Depression led to the fading of the belief, or rather recognition, by the middle class that they, as well as the working class, needed social insurance.”
  • what the economy delivered to “hard-working white men” no longer matched what they saw as their just deserts: in their eyes, “the rich got richer, the unworthy and minority poor got handouts.”
  • As Bell would have put it, the politics of entitlement, bred by years of economic success that so many people had come to take for granted, squeezed out the politics of opportunity and ambition, giving rise to the politics of resentment.
  • The new era therefore became “a time to question the bourgeois virtues of hard, regular work and thrift in pursuit of material abundance.”
  • DeLong’s unspoken agenda would surely include rolling back many of the changes made in the U.S. tax code over the past half-century, as well as reinvigorating antitrust policy to blunt the dominance, and therefore outsize profits, of the mega-firms that now tower over key sectors of the economy
  • He would also surely reverse the recent trend moving away from free trade. Central bankers should certainly behave like Paul Volcker (appointed by President Carter), whose decisive action finally broke the 1970s inflation even at considerable economic cost
  • Not only Galbraith’s main themes but many of his more specific observations as well seem as pertinent, and important, today as they did then.
  • What will future readers of Slouching Towards Utopia conclude?
  • If anything, DeLong’s narratives will become more valuable as those events fade into the past. Alas, his description of fascism as having at its center “a contempt for limits, especially those implied by reason-based arguments; a belief that reality could be altered by the will; and an exaltation of the violent assertion of that will as the ultimate argument” will likely strike a nerve with many Americans not just today but in years to come.
  • what about DeLong’s core explanation of what went wrong in the latter third of his, and our, “long century”? I predict that it too will still look right, and important.
Javier E

Guns, Germs, and The Future of Us - Wyatt Edward Gates - Medium - 0 views

  • Jared Diamond’s seminal work Guns, Germs, and Steel has many flaws, but it provides some useful anecdotes about how narrative and consciousness shape the way human organization progresses
  • Past critical transformations of thought can help us see how we need to transform ourselves now in order to survive the future.
  • something both ancient and immediate: the way we define who is in our tribe plays a critical role in what kind of social organization we can build and maintain
  • ...25 more annotations...
  • You can’t have a blood family of 300 million, nor even a large enough one to do things like build an agrarian society
  • In order to have large cities built on agrarianism it was necessary not only to innovate technology, but to transform our very consciousness as it related to how we defined what a person was, both ourselves and others
  • Instead of needing to have real, flowing blood with common DNA from birth, it was merely necessary to be among the same abstract family organized under a king of some kind — a kind of stand in for the father or patriarch. We developed law and law enforcement as abstract disembodied voices of the father. This allowed total strangers without any family ties to interact in the same society in a constructive and organized way. Thus: civilization as we know it
  • Those ancient polities finally developed into the Nation, a kind of tribe so fully abstracted that you can be of any blood and language and religion and still function within it.
  • So, too, are all other forms of human separation — and the opposition and conflicts they spawn — illusory in nature. We moved beyond blood, but then it was language or religion or fealty that made it impossible to work together, and we warred over that
  • we’re told these borders mean everything, that they are real and urgent and demand constant sacrifice to maintain.
  • why is that border there? Why borders?
  • We’re stuck in a mode of thinking that’s no longer sensible. There isn’t a reason for borders. There never really was, but now more than ever we have no utility for them, no need for them
  • What humanity has to do is wake up to the reality of post-tribalism. This means seeing through all these invented borders to the truth that we are all people, we are all fundamentally the same, and we can all learn to live with one another.
  • It was the idea of necessary conflict based on blood that preceded the fights that appeared to justify the belief in that blood-based conflict.
  • Nations have saturated the entire globe. There are no more frontiers. It’s all Nations butting up against one another.
  • We are all people of a similar nature and we do have the option to relate to one another as people for the sake of saving our shared homes and futures. We all hunger and thirst and become lonely, we all laugh and weep in the same language. Stripped of confounding symbols we are undivided.
  • There are a lot of people upset about the illusion of borders. They want a different reality, one in which there are Good Tribes (their tribe) and Bad Tribes (all the other ones).
  • but the world is already so mixed together they can’t draw those borders anymore. Hence: fascism.
  • There are no firm foundations for defining this tribe, however, so he’s left to cobble together some kind of ad hoc notion of in- and out-group. Like a magpie he collects ways of dividing people as appeals to his caprice: race, sex, Nation, etc., but there’s no greater sense to it, so it’s all arbitrary, all a mess.
  • No amount of magical thinking from conservatives can change the reality of globalism, however; what one Nation does to pollute will affect us all, and that is according to the laws of physics. No political movement can change those physics. We have to adapt or perish.
  • a key part of it is a simple lack of imagination. He just doesn’t realize there’s an option to not have borders, because his entire consciousness is married to the idea of of-me and not-of-me, Us and Them, and if there is no Them there can’t be an Us, and therefore life stops making sense
  • What has to be true if there are no tribes? We have no need to discriminate among who we may love. Loving and caring for all people as if they were blood family is the path forward
  • There needs to be a new story for us to share. It’s not enough to stop believing in the old way of borders, we have to actively seek out a new way of thinking and speaking and living that reflects the world as it is and as it can be.
  • there are others who have more tangible investments in borders: Those who have grown fat off the conflicts driven by these invented borders don’t want us to see how pointless it all is. These billionaires and presidents and kings want us to keep fighting against one another over the borders they so lazily define because it gives them a means of power and control.
  • We have to be ready for their opposition, however. They’ll do what they can to force us to act as if their borders are real. We don’t need to listen, though we do need to be ready to sacrifice.
  • Without a globally-coordinated response we can’t resolve a globally-driven problem such as climate change. If we can grant the humanity of all people we can start to imagine ways of relating to one another that aren’t opposed and antagonistic, but which are cooperative and aimed at harmony.
  • This transformation of consciousness must happen in our own hearts and minds before it can happen in concert.
  • the Nation has already been shown to be unnecessary because of social globalism. Pick a major city on earth and you’ll find every kind of person living together in peace! Not perfect peace, but not constant and unavoidable war, and that is what counts.
  • We can’t keep pretending as if borders matter when we can so clearly see that they don’t, but we can’t just have no story at all, there must be a way of contextualizing a future without borders. I don’t know what that story is, exactly, but I believe it is something like love writ large. Once we’re ready to start telling it we can start living it.
Javier E

Opinion | We Have Two Visions of the Future, and Both Are Wrong - The New York Times - 0 views

  • these fears can no longer be confined to a fanatical fringe of gun-toting survivalists. The relentless onslaught of earthshaking crises, unfolding against the backdrop of flash floods and forest fires, has steadily pushed apocalyptic sentiment into the mainstream. When even the head of the United Nations warns that rising sea levels could unleash “a mass exodus on a biblical scale,” it is hard to remain sanguine about the state of the world. One survey found that over half of young adults now believe that “humanity is doomed” and “the future is frightening.”
  • At the same time, recent years have also seen the resurgence of a very different kind of narrative. Exemplified by a slew of best-selling books and viral TED talks, this view tends to downplay the challenges we face and instead insists on the inexorable march of human progress. If doomsday thinkers worry endlessly that things are about to get a lot worse, the prophets of progress maintain that things have only been getting better — and are likely to continue to do so in the future.
  • If things are really getting better, there is clearly no need for transformative change to confront the most pressing problems of our time. So long as we stick to the script and keep our faith in the redeeming qualities of human ingenuity and technological innovation, all our problems will eventually resolve themselves.
  • ...9 more annotations...
  • It is easy to understand the appeal of such one-sided tales. As human beings, we seem to prefer to impose clear and linear narratives on a chaotic and unpredictable reality; ambiguity and contradiction are much harder to live with.
  • To truly grasp the complex nature of our current time, we need first of all to embrace its most terrifying aspect: its fundamental open-endedness. It is precisely this radical uncertainty — not knowing where we are and what lies ahead — that gives rise to such existential anxiety.
  • Anthropologists have a name for this disturbing type of experience: liminality
  • liminality originally referred to the sense of disorientation that arises during a rite of passage. In a traditional coming-of-age ritual, for instance, it marks the point at which the adolescent is no longer considered a child but is not yet recognized as an adult — betwixt and between
  • We are ourselves in the midst of a painful transition, a sort of interregnum, as the Italian political theorist Antonio Gramsci famously called it, between an old world that is dying and a new one that is struggling to be born. Such epochal shifts are inevitably fraught with danger
  • the great upheavals in world history can equally be seen “as genuine signs of vitality” that “clear the ground” of discredited ideas and decaying institutions. “The crisis,” he wrote, “is to be regarded as a new nexus of growth.”
  • Once we embrace this Janus-faced nature of our times, at once frightening yet generative, a very different vision of the future emerges.
  • we see phases of relative calm punctuated every so often by periods of great upheaval. These crises can be devastating, but they are also the drivers of history.
  • even the collapse of modern civilization — but it may also open up possibilities for transformative change
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • ...13 more annotations...
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
Javier E

Book review - The Dawn of Everything: A New History of Humanity | The Inquisitive Biolo... - 0 views

  • Every few years, it seems, there is a new bestselling Big History book. And not infrequently, they have rather grandiose titles.
  • I hope to convince you why I think this book will stand the test of time better.
  • First, rather than one author’s pet theory, The Dawn of Everything is the brainchild of two outspoken writers: anthropologist David Graeber (a figurehead in the Occupy Wall Street movement and author of e.g. Bullshit Jobs) and archaeologist David Wengrow (author of e.g. What Makes Civilization?). I expect a large part of their decade-long collaboration consisted of shooting holes in each other’s arguments
  • ...24 more annotations...
  • Colonisation exposed us to new ideas that shocked and confused us. Graeber & Wengrow focus on the French coming into contact with Native Americans in Canada, and in particular on Wendat Confederacy philosopher–statesman Kandiaronk as an example of European traders, missionaries, and intellectuals debating with, and being criticized by indigenous people. Historians have downplayed how much these encounters shaped Enlightenment ideas.
  • this thought-provoking book is armed to the teeth with fascinating ideas and interpretations that go against mainstream thinking
  • Rather than yet another history book telling you how humanity got here, they take their respective disciplines to task for dealing in myths.
  • Its legacy, shaped via several iterations, is the modern textbook narrative: hunter-gathering was replaced by pastoralism and then farming; the agricultural revolution resulted in larger populations producing material surpluses; these allowed for specialist occupations but also needed bureaucracies to share and administer them to everyone; and this top-down control led to today’s nation states. Ta-daa!
  • this simplistic tale of progress ignores and downplays that there was nothing linear or inevitable about where we have ended up.
  • Take agriculture. Rather than humans enthusiastically entering into what Harari in Sapiens called a Faustian bargain with crops, there were many pathways and responses
  • Experiments show that plant domestication could have been achieved in as little as 20–30 years, so the fact that cereal domestication here took some 3,000 years questions the notion of an agricultural “revolution”. Lastly, this book includes many examples of areas where agriculture was purposefully rejected. Designating such times and places as “pre-agricultural” is misleading, write the authors, they were anti-agricultural.
  • The idea that agriculture led to large states similarly needs revision
  • correlation is not causation, and some 15–20 additional centres of domestication have since been identified that followed different paths. Some cities have previously remained hidden in the sediments of ancient river deltas until revealed by modern remote-sensing technology.
  • “extensive agriculture may thus have been an outcome, not a cause, of urbanization”
  • And cities did not automatically imply social stratification. The Dawn of Everything fascinates with its numerous examples of large settlements without ruling classes, such as Ukrainian mega-sites, the Harappan civilization, or Mexican city-states.
  • These instead relied on collective decision-making through assemblies or councils, which questions some of the assumptions of evolutionary psychology about scale: that larger human groups require complex (i.e. hierarchical) systems to organize them.
  • see what is staring them in the face
  • humans have always been very capable of consciously experimenting with different social arrangements. And—this is rarely acknowledged—they did so on a seasonal basis, spending e.g. part of the year settled in large communal groups under a leader, and another part as small, independently roving bands.
  • Throughout, Graeber & Wengrow convincingly argue that the only thing we can say about our ancestors is that “there is no single pattern. The only consistent phenomenon is the very fact of alteration […] If human beings, through most of our history, have moved back and forth fluidly between different social arrangements […] maybe the real question should be ‘how did we get stuck?’”
  • Next to criticism, the authors put out some interesting ideas of their own, of which I want to quickly highlight two.
  • The first is that some of the observed variations in social arrangements resulted from schismogenesis. Anthropologist Gregory Bateson coined this term in the 1930s to describe how people define themselves against or in opposition to others, adopting behaviours and attitudes that are different.
  • The second idea is that states can be described in terms of three elementary forms of domination: control of violence, control of information, and individual charisma, which express themselves as sovereignty, administration, and competitive politics.
  • Our current states combine these three, and thus we have state-endorsed violence in the form of law enforcement and armies, bureaucracy, and the popularity contests we call elections in some countries, and monarchs, oligarchs, or tyrants in other countries. But looking at history, there is no reason why this should be and the authors provide examples of societies that showed only one or two such forms of control
  • Asking which past society most resembles today’s is the wrong question to ask. It risks slipping into an exercise in retrofitting, “which makes us scour the ancient world for embryonic versions of our modern nation states”
  • I have left unmentioned several other topics: the overlooked role of women, the legacy of Rousseau’s and Hobbes’s ideas, the origins of inequality and the flawed assumptions hiding behind that question
  • There are so many historical details and delights hiding between these covers that I was thoroughly enthralled.
  • If you have any interest in big history, archaeology, or anthropology, this book is indispensable. I am confident that the questions and critiques raised here will remain relevant for a long time to come.
  • I was particularly impressed by worbsintowords’s in-depth critique, running to five videos so far, on his YouTube channel What is Politics?
Javier E

Our Machine Masters - NYTimes.com - 0 views

  • the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.
  • This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection and better algorithms. The upshot is clear, “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I.”
  • Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.
  • ...14 more annotations...
  • in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them.
  • The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head.
  • Advances in artificial intelligence will accelerate this centralizing trend. That’s because A.I. companies will be able to reap the rewards of network effects. The bigger their network and the more data they collect, the more effective and attractive they become.
  • As a result, “our A.I. future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.”
  • engineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.
  • The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do
  • On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.
  • For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math.
  • In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions.
  • I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.
  • In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.
  • In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it
  • In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much.
  • In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model. (A toy sketch of this feedback loop appears after these annotations.)
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • The meme, @TetraspaceWest said, wasn't necessarily implying that the A.I. was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There’s no actual shape so it’s not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn’t baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
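
A toy sketch of the R.L.H.F. scoring loop described in these annotations may help make the metaphor concrete. Everything below is invented for illustration: real systems train a large neural reward model on vast numbers of human comparisons and then update the chatbot itself with reinforcement learning, whereas this sketch uses a bag-of-words tally and best-of-n reranking to stand in for both steps.

      # Toy R.L.H.F.-style loop in plain Python (illustration only).
      # Step 1: candidate chatbot responses to one prompt.
      candidates = [
          "I can't help with that, but here is a safe alternative.",
          "Sure, here is exactly how to do the dangerous thing.",
          "Happy to explain the safe version of what you asked.",
      ]

      # Step 2: human feedback, one rating per response (1 = good, 0 = bad).
      human_scores = {candidates[0]: 1, candidates[1]: 0, candidates[2]: 1}

      # Step 3: fit a crude bag-of-words "reward model" from the ratings:
      # words seen in well-rated responses gain weight, others lose it.
      def train_reward_model(scores):
          weights = {}
          for text, score in scores.items():
              for word in text.lower().split():
                  weights[word] = weights.get(word, 0.0) + (1.0 if score else -1.0)
          return weights

      def reward(weights, text):
          return sum(weights.get(w, 0.0) for w in text.lower().split())

      model = train_reward_model(human_scores)

      # Step 4: at generation time, keep whichever candidate the reward
      # model scores highest (best-of-n selection standing in for the
      # policy-optimization step of real R.L.H.F.).
      new_candidates = [
          "Here is exactly how to do the dangerous thing.",
          "Here is a safe alternative instead.",
      ]
      print(max(new_candidates, key=lambda t: reward(model, t)))

Note how the sketch mirrors the smiley-face-mask point above: the underlying generator is untouched, and only the selection among its outputs changes, which is exactly the critics' worry that fine-tuning obscures rather than alters the beast underneath.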
Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • ...5 more annotations...
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
Javier E

Will ChatGPT Kill the Student Essay? - The Atlantic - 0 views

  • Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: ​​“I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.
  • The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up
  • “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.”
  • ...18 more annotations...
  • In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before.
  • the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.
  • These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences
  • Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.
  • As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide.
  • As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone
  • The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.
  • Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine.
  • the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations.
  • The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?
  • Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.
  • now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems
  • Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?
  • despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
  • The humanists will need to understand natural-language processing because it’s the future of language
  • For that space for collaboration to exist, both sides will have to take the most difficult leaps for highly educated people: understand that they need the other side, and admit their basic ignorance.
  • But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.
Javier E

Elon studies future of "Generation Always-On" - 1 views

  • Elon studies the future of "Generation Always-On"
  • By the year 2020, it is expected that youth of the “always-on generation,” brought up from childhood with a continuous connection to each other and to information, will be nimble, quick-acting multitaskers who count on the Internet as their external brain and who approach problems in a different way from their elders. "There is no doubt that brains are being rewired,"
  • the Imagining the Internet Center refers to the teens-to-20s age group born since the turn of the century as Generation AO, for “always-on.” “They have grown up in a world that has come to offer them instant access to nearly the entirety of human knowledge, and incredible opportunities to connect, create and collaborate,"
  • ...10 more annotations...
  • some said they are already witnessing deficiencies in young people’s abilities to focus their attention, be patient and think deeply. Some experts expressed concerns that trends are leading to a future in which most people become shallow consumers of information, endangering society.
  • Many of the respondents in this survey predict that Gen AO will exhibit a thirst for instant gratification and quick fixes and a lack of patience and deep-thinking ability due to what one referred to as “fast-twitch wiring.”
  • “The replacement of memorization by analysis will be the biggest boon to society since the coming of mass literacy in the late 19th to early 20th century.” — Paul Jones, University of North Carolina-Chapel Hill
  • “Teens find distraction while working, distraction while driving, distraction while talking to the neighbours. Parents and teachers will have to invest major time and efforts into solving this issue – silence zones, time-out zones, meditation classes without mobile, lessons in ignoring people.”
  • “Society is becoming conditioned into dependence on technology in ways that, if that technology suddenly disappears or breaks down, will render people functionally useless. What does that mean for individual and social resiliency?”
  • “Short attention spans resulting from quick interactions will be detrimental to focusing on the harder problems and we will probably see a stagnation in many areas: technology, even social venues such as literature. The people who will strive and lead the charge will be the ones able to disconnect themselves to focus.”
  • “The underlying issue is that they will become dependent on the Internet in order to solve problems and conduct their personal, professional, and civic lives. Thus centralized powers that can control access to the Internet will be able to significantly control future generations. It will be much as in Orwell's 1984, where control was achieved by using language to shape and limit thought, so future regimes may use control of access to the Internet to shape and limit thought.”
  • “Increasingly, teens and young adults rely on the first bit of information they find on a topic, assuming that they have found the ‘right’ answer, rather than using context and vetting/questioning the sources of information to gain a holistic view of a topic.”
  • “Parents and kids will spend less time developing meaningful and bonded relationships in deference to the pursuit and processing of more and more segmented information competing for space in their heads, slowly changing their connection to humanity.”
  • “It’s simply not possible to discuss, let alone form societal consensus around major problems without lengthy, messy conversations about those problems. A generation that expects to spend 140 or fewer characters on a topic and rejects nuance is incapable of tackling these problems.”
Javier E

The Chatbots Are Here, and the Internet Industry Is in a Tizzy - The New York Times - 0 views

  • He cleared his calendar and asked employees to figure out how the technology, which instantly provides comprehensive answers to complex questions, could benefit Box, a cloud computing company that sells services that help businesses manage their online data.
  • Mr. Levie’s reaction to ChatGPT was typical of the anxiety — and excitement — over Silicon Valley’s new new thing. Chatbots have ignited a scramble to determine whether their technology could upend the economics of the internet, turn today’s powerhouses into has-beens or create the industry’s next giants.
  • Cloud computing companies are rushing to deliver chatbot tools, even as they worry that the technology will gut other parts of their businesses. E-commerce outfits are dreaming of new ways to sell things. Social media platforms are being flooded with posts written by bots. And publishing companies are fretting that even more dollars will be squeezed out of digital advertising.
  • ...22 more annotations...
  • The volatility of chatbots has made it impossible to predict their impact. In one second, the systems impress by fielding a complex request for a five-day itinerary, making Google’s search engine look archaic. A moment later, they disturb by taking conversations in dark directions and launching verbal assaults.
  • The result is an industry gripped with the question: What do we do now?
  • The A.I. systems could disrupt $100 billion in cloud spending, $500 billion in digital advertising and $5.4 trillion in e-commerce sales,
  • As Microsoft figures out a chatbot business model, it is forging ahead with plans to sell the technology to others. It charges $10 a month for a cloud service, built in conjunction with the OpenAI lab, that provides developers with coding suggestions, among other things.
  • Smaller companies like Box need help building chatbot tools, so they are turning to the giants that process, store and manage information across the web. Those companies — Google, Microsoft and Amazon — are in a race to provide businesses with the software and substantial computing power behind their A.I. chatbots.
  • “The cloud computing providers have gone all in on A.I. over the last few months,
  • “They are realizing that in a few years, most of the spending will be on A.I., so it is important for them to make big bets.”
  • Yusuf Mehdi, the head of Bing, said the company was wrestling with how the new version would make money. Advertising will be a major driver, he said, but the company expects fewer ads than traditional search allows.
  • Google, perhaps more than any other company, has reason to both love and hate the chatbots. It has declared a “code red” because their abilities could be a blow to its $162 billion business showing ads on searches.
  • “The discourse on A.I. is rather narrow and focused on text and the chat experience,” Mr. Taylor said. “Our vision for search is about understanding information and all its forms: language, images, video, navigating the real world.”
  • Sridhar Ramaswamy, who led Google’s advertising division from 2013 to 2018, said Microsoft and Google recognized that their current search business might not survive. “The wall of ads and sea of blue links is a thing of the past,” said Mr. Ramaswamy, who now runs Neeva, a subscription-based search engine.
  • As that underlying tech, known as generative A.I., becomes more widely available, it could fuel new ideas in e-commerce. Late last year, Manish Chandra, the chief executive of Poshmark, a popular online secondhand store, found himself daydreaming during a long flight from India about chatbots building profiles of people’s tastes, then recommending and buying clothes or electronics. He imagined grocers instantly fulfilling orders for a recipe.
  • “It becomes your mini-Amazon,” said Mr. Chandra, who has made integrating generative A.I. into Poshmark one of the company’s top priorities over the next three years. “That layer is going to be very powerful and disruptive and start almost a new layer of retail.”
  • In early December, users of Stack Overflow, a popular social network for computer programmers, began posting substandard coding advice written by ChatGPT. Moderators quickly banned A.I.-generated text
  • But people could post this questionable content far faster than they could write posts on their own, said Dennis Soemers, a moderator for the site. “Content generated by ChatGPT looks trustworthy and professional, but often isn’t,”
  • When websites thrived during the pandemic as traffic from Google surged, Nilay Patel, editor in chief of The Verge, a tech news site, warned publishers that the search giant would one day turn off the spigot. He had seen Facebook stop linking out to websites and foresaw Google following suit in a bid to boost its own business.
  • He predicted that visitors from Google would drop from a third of websites’ traffic to nothing. He called that day “Google zero.”
  • Because chatbots replace website search links with footnotes to answers, he said, many publishers are now asking if his prophecy is coming true.
  • Strategists and engineers at the digital advertising company CafeMedia have met twice a week to contemplate a future where A.I. chatbots replace search engines and squeeze web traffic.
  • The group recently discussed what websites should do if chatbots lift information but send fewer visitors. One possible solution would be to encourage CafeMedia’s network of 4,200 websites to insert code that limited A.I. companies from taking content, a practice currently allowed because it contributes to search rankings. (A sketch of what such opt-out code can look like appears after these annotations.)
  • Courts are expected to be the ultimate arbiter of content ownership. Last month, Getty Images sued Stability AI, the start-up behind the art generator tool Stable Diffusion, accusing it of unlawfully copying millions of images. The Wall Street Journal has said using its articles to train an A.I. system requires a license.
  • In the meantime, A.I. companies continue collecting information across the web under the “fair use” doctrine, which permits limited use of material without permission.
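
The “code” mentioned in the annotation above is, in practice, usually a crawler-exclusion rule. Below is a minimal sketch of what such an opt-out can look like in a site's robots.txt file. The user-agent tokens shown (GPTBot for OpenAI's crawler, CCBot for Common Crawl, Google-Extended for Google's A.I. training) are publicly documented ones, but honoring them is voluntary on the crawler's side, and the exact set a publisher would block is an assumption here.

      # robots.txt: refuse A.I. training crawlers, keep ordinary indexing
      User-agent: GPTBot
      Disallow: /

      User-agent: CCBot
      Disallow: /

      User-agent: Google-Extended
      Disallow: /

      User-agent: *
      Allow: /

A token like Google-Extended illustrates the bind discussed above: it separates A.I. training from search indexing, whereas blocking a crawler that serves both purposes would also cost a site its search rankings.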
Javier E

Is our world a simulation? Why some scientists say it's more likely than not | Technolo... - 3 views

  • Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence
  • Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living In a Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.
  • If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,
  • ...14 more annotations...
  • At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.
  • “Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”
  • “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.
  • If there are many more simulated minds than organic ones, then the chances of us being among the real minds start to look more and more unlikely. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?” (A back-of-envelope version of this counting argument appears after these annotations.)
  • Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said
  • “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.
  • “In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,”
  • Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,”
  • That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.
  • “For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game.”
  • How can the hypothesis be put to the test?
  • scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that make the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.
  • First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,”
  • it means we will soon have the same ability to create our own simulations. “We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”
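The counting argument in the bullets above turns on simple arithmetic plus an indifference assumption: if you have no way to tell which kind of mind you are, your odds of being unsimulated are just the organic share of all minds. A toy sketch in Python, with population figures invented purely for illustration:

```python
# Toy version of the counting argument behind the simulation hypothesis.
# Assumes indifference: you are equally likely to be any one existing mind.
# The population figures below are made up for illustration.

def chance_unsimulated(organic_minds: int, simulated_minds: int) -> float:
    """Probability of being one of the organic minds among all minds."""
    return organic_minds / (organic_minds + simulated_minds)

# One organic civilization vs. far more abundant simulated minds:
print(chance_unsimulated(organic_minds=10**10, simulated_minds=10**15))
# -> ~0.00001; as simulated minds multiply, the odds of being "real" shrink.
```

Bostrom's paper frames this as one horn of a trilemma rather than a forecast, which is roughly where Tegmark's objection in the excerpts gets its purchase: the probabilities are computed inside whatever physics the simulators use, which we would not know.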
Javier E

Opinion | The Apps on My Phone Are Stalking Me - The New York Times - 0 views

  • There is much about the future that keeps me up at night — A.I. weaponry, undetectable viral deepfakes
  • but in the last few years, one technological threat has blipped my fear radar much faster than others. That fear? Ubiquitous surveillance.
  • I am no longer sure that human civilization can undo or evade living under constant, extravagantly detailed physical and even psychic surveillance
  • ...24 more annotations...
  • as a species, we are not doing nearly enough to avoid always being watched or otherwise digitally recorded.
  • your location, your purchases, video and audio from within your home and office, your online searches and every digital wandering, biometric tracking of your face and other body parts, your heart rate and other vital signs, your every communication, recording, and perhaps your deepest thoughts or idlest dreams
  • in the future, if not already, much of this data and more will be collected and analyzed by some combination of governments and corporations, among them a handful of megacompanies whose powers nearly match those of governments
  • Over the last year, as part of Times Opinion’s Privacy Project, I’ve participated in experiments in which my devices were closely monitored in order to determine the kind of data that was being collected about me.
  • I’ve realized how blind we are to the kinds of insights tech companies are gaining about us through our gadgets. Our blindness not only keeps us glued to privacy-invading tech
  • it also means that we’ve failed to create a political culture that is in any way up to the task of limiting surveillance.
  • few of our cultural or political institutions are even much trying to tamp down the surveillance state.
  • Yet the United States and other supposedly liberty-loving Western democracies have not ruled out such a future
  • like Barack Obama before him, Trump and the Justice Department are pushing Apple to create a backdoor into the data on encrypted iPhones — they want the untrustworthy F.B.I. and any local cop to be able to see everything inside anyone’s phone.
  • the fact that both Obama and Trump agreed on the need for breaking iPhone encryption suggests how thoroughly political leaders across a wide spectrum have neglected privacy as a fundamental value worthy of protection.
  • Americans are sleepwalking into a future nearly as frightening as the one the Chinese are constructing. I choose the word “sleepwalking” deliberately, because when it comes to digital privacy, a lot of us prefer the comfortable bliss of ignorance.
  • Among other revelations: Advertising companies and data brokers are keeping insanely close tabs on smartphones’ location data, tracking users so precisely that their databases could arguably compromise national security or political liberty.
  • Tracking technologies have become cheap and widely available — for less than $100, my colleagues were able to identify people walking by surveillance cameras in Bryant Park in Manhattan.
  • The Clearview AI story suggests another reason to worry that our march into surveillance has become inexorable: Each new privacy-invading technology builds on a previous one, allowing for scary outcomes from new integrations and collections of data that few users might have anticipated.
  • The upshot: As the location-tracking apps followed me, I was able to capture the pings they sent to online servers — essentially recording their spying (a sketch of one way to do this follows this list)
  • On the map, you can see the apps are essentially stalking me. They see me drive out one morning to the gas station, then to the produce store, then to Safeway; later on I passed by a music school, stopped at a restaurant, then Whole Foods.
  • But location was only one part of the data the companies had about me; because geographic data is often combined with other personal information — including a mobile advertising ID that can help merge what you see and do online with where you go in the real world — the story these companies can tell about me is actually far more detailed than the one I can tell about myself.
  • I can no longer pretend I’ve got nothing to worry about. Sure, I’m not a criminal — but do I want anyone to learn everything about me?
  • more to the point: Is it wise for us to let any entity learn everything about everyone?
  • The remaining uncertainty about the surveillance state is not whether we will submit to it — only how readily and completely, and how thoroughly it will warp our society.
  • Will we allow the government and corporations unrestricted access to every bit of data we ever generate, or will we decide that some kinds of collections, like the encrypted data on your phone, should be forever off limits, even when a judge has issued a warrant for it?
  • In the future, will there be room for any true secret — will society allow any unrecorded thought or communication to evade detection and commercial analysis?
  • How completely will living under surveillance numb creativity and silence radical thought?
  • Can human agency survive the possibility that some companies will know more about all of us than any of us can ever know about ourselves?
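On the "capturing the pings" bullet above: the column does not say what tooling the Times used, but a common way to record an app's outbound traffic is to route the phone through an intercepting proxy. A minimal sketch using mitmproxy's Python addon API (mitmproxy is a real open-source tool; treating it as the method here is an assumption):

```python
# ping_logger.py - minimal mitmproxy addon that logs every request an app
# makes, exposing which servers receive location "pings" and other data.
# Run with:  mitmdump -s ping_logger.py
# The phone must be configured to use this machine as its HTTP(S) proxy and
# to trust mitmproxy's CA certificate; apps that pin certificates stay opaque.

from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    # pretty_host is the hostname the client actually requested.
    print(f"{flow.request.method} {flow.request.pretty_host}{flow.request.path}")
```

Each logged line is one "ping"; matching the hostnames against known ad-tech and data-broker domains is what turns a raw log into the kind of self-surveillance map the author describes.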