
TOK Friends / Group items tagged: engine


Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated at the top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what was already there.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
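A single flipped bit is easy to make concrete. The sketch below is a hypothetical illustration (it is not Toyota's code, and the bitfield layout is invented): one corrupted bit in a packed status word silently turns a closed-throttle state into an open-throttle one.

```python
# Hypothetical sketch of a single bit flip changing safety-critical state.
# The bitfield layout is invented for illustration; it is not Toyota's code.

THROTTLE_OPEN = 1 << 3  # bit 3 of a packed status word: 1 = open throttle

def throttle_commanded(flags: int) -> bool:
    return bool(flags & THROTTLE_OPEN)

flags = 0b0000_0000                    # driver's foot is off the pedal
assert not throttle_commanded(flags)

corrupted = flags ^ THROTTLE_OPEN      # a memory fault flips one bit
assert throttle_commanded(corrupted)   # software now "sees" an open throttle
```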
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
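The heart of such an environment is a tight edit-evaluate-render loop: every change to the source is re-executed immediately and the result redrawn. A minimal sketch of that loop, assuming the learner's program defines a `picture` string in a hypothetical file named `sketch.py` (real tools hook every keystroke rather than polling a file):

```python
# Minimal sketch of a "live" environment: re-run the code whenever it changes.
# Assumes a hypothetical file sketch.py that defines a string named `picture`.
import time
from pathlib import Path

SOURCE = Path("sketch.py")

def render(source_text: str) -> None:
    scope: dict = {}
    exec(source_text, scope)          # evaluate the learner's code
    print(scope.get("picture", ""))   # redraw its output

last = None
while True:
    text = SOURCE.read_text()
    if text != last:                  # source changed: update instantly
        render(text)
        last = text
    time.sleep(0.1)                   # poll; real editors react per keystroke
```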
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
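A minimal sketch of the elevator model described above, with the states and transitions taken from the text: because the model only contains legal moves, an illegal one (opening the door while moving) is not merely caught, it cannot even be expressed.

```python
# Minimal sketch of the elevator logic as an explicit state machine.
# States and transitions follow the whiteboard description above.
TRANSITIONS = {
    "door_open":   {"close": "door_closed"},
    "door_closed": {"open": "door_open", "go": "moving"},
    "moving":      {"stop": "door_closed"},
}

def step(state: str, event: str) -> str:
    legal = TRANSITIONS[state]
    if event not in legal:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
    return legal[event]

state = "door_open"
state = step(state, "close")   # the only way to get moving: close the door
state = step(state, "go")
# step(state, "open") would raise: there is no such edge in the model
```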
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
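The point about scale is easy to check with invented numbers: an event with a one-in-a-billion chance per request stops being rare at a million requests per second.

```python
# Invented numbers, but the arithmetic is the point: "extremely rare"
# per-request events become routine at service scale.
p_per_request = 1e-9          # one-in-a-billion chance per request
requests_per_second = 1_000_000
per_day = p_per_request * requests_per_second * 86_400
print(per_day)                # ~86 occurrences per day, every day
```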
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
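TLA+ is its own notation, but the idea behind its model checker can be sketched in ordinary code: enumerate every state the system can reach and check an invariant in each one, instead of sampling a handful of test cases. Here is that idea applied to a toy elevator model; the state encoding is invented for illustration.

```python
# Minimal sketch of exhaustive state-space checking: breadth-first search
# over every reachable state, asserting a safety invariant in each.
# The elevator encoding (moving?, door_open?) is invented for illustration.
from collections import deque

def next_states(state):
    moving, door_open = state
    successors = []
    if door_open and not moving:
        successors.append((False, False))        # close the door
    if not door_open:
        successors.append((not moving, False))   # start or stop moving
        if not moving:
            successors.append((False, True))     # open the door while stopped
    return successors

def invariant(state):
    moving, door_open = state
    return not (moving and door_open)            # never move with door open

seen, frontier = set(), deque([(False, True)])   # initial: stopped, door open
while frontier:
    state = frontier.popleft()
    if state in seen:
        continue
    seen.add(state)
    assert invariant(state), f"invariant violated in {state}"
    frontier.extend(next_states(state))

print(f"exhaustively checked {len(seen)} reachable states")  # 3 states
```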
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

Technology's Man Problem - NYTimes.com - 0 views

  • computer engineering, the most innovative sector of the economy, remains behind. Many women who want to be engineers encounter a field where they not only are significantly underrepresented but also feel pushed away.
  • Among the women who join the field, 56 percent leave by midcareer, a startling attrition rate that is double that for men, according to research from the Harvard Business School.
  • A culprit, many people in the field say, is a sexist, alpha-male culture that can make women and other people who don’t fit the mold feel unwelcome, demeaned or even endangered.
  • ...12 more annotations...
  • “I’ve been a programmer for 13 years, and I’ve always been one of the only women and queer people in the room. I’ve been harassed, I’ve had people make suggestive comments to me, I’ve had people basically dismiss my expertise. I’ve gotten rape and death threats just for speaking out about this stuff.”
  • “We see these stories, ‘Why aren’t there more women in computer science and engineering?’ and there’s all these complicated answers like, ‘School advisers don’t have them take math and physics,’ and it’s probably true,” said Lauren Weinstein, a man who has spent his four-decade career in tech working mostly with other men, and is currently a consultant for Google. “But I think there’s probably a simpler reason,” he said, “which is these guys are just jerks, and women know it.”
  • once programming gained prestige, women were pushed out. Over the decades, the share of women in computing has continued to decline. In 2012, just 18 percent of computer-science college graduates were women, down from 37 percent in 1985, according to the National Center for Women & Information Technology.
  • Some 1.2 million computing jobs will be available in 2022, yet United States universities are producing only 39 percent of the graduates needed to fill them, the N.C.W.I.T. estimates.
  • an engineer at Pinterest has collected data from people at 133 start-ups and found that an average of 12 percent of the engineers are women.
  • Twenty percent of software developers are women, according to the Labor Department, and fewer than 6 percent of engineers are black or Hispanic. Comparatively, 56 percent of people in business and financial-operations jobs are women, as are 36 percent of physicians and surgeons and one-third of lawyers.
  • “It makes a hostile environment for me,” she said. “But I don’t want to raise my hand and call negative attention toward myself, and become the woman who is the problem — ‘that woman.’ In start-up culture they protect their own tribe, so by putting my hand up, I’m saying I’m an ‘other,’ I shouldn’t be there, so for me that’s an economic threat.”
  • “Many women have come to me and said they basically have had to hide on the Net now,” said Mr. Weinstein, who works on issues of identity and anonymity online. “They use male names, they don’t put their real photos up, because they are immediately targeted and harassed.”
  • “It’s a boys’ club, and you have to try to get into it, and they’re trying as hard as they can to prove you can’t,” said Ephrat Bitton, the director of algorithms at FutureAdvisor, an online investment start-up that she says has a better culture because almost half the engineers are women.
  • Writing code is a high-pressure job with little room for error, as are many jobs. But coding can be stressful in a different way, women interviewed for this article said, because code reviews — peer reviews to spot mistakes in software — can quickly devolve.
  • “Code reviews are brutal — ‘Mine is better than yours, I see flaws in yours’ — and they should be, for the creation of good software,” said Ellen Ullman, a software engineer and author. “I think when you add a drop of women into it, it just exacerbates the problem, because here’s a kind of foreigner.”
  • But some women argue that these kinds of initiatives are unhelpful. “My general issue with the coverage of women in tech is that women in the technology press are talked about in the context of being women, and men are talked about in the context of being in technology,” said a technical woman who would speak only on condition of anonymity because she did not want to be part of an article about women in tech.
martinelligi

It's not just a social media problem - how search engines spread misinformation - St Ge... - 0 views

  • Ad-driven search engines, like social media platforms, are designed to reward clicking on enticing links because it helps the search companies boost their business metrics. As researchers who study search and recommendation systems, my colleagues and I show that this dangerous combination of corporate profit motive and individual susceptibility makes the problem difficult to fix.
  • It is in the search engine companies’ best interest to give you things that you want to read, watch or simply click. Therefore, as a search engine or any recommendation system creates a list of items to present, it calculates the likelihood that you’ll click on the items.
  • Similar to problematic social media algorithms, search engines learn to serve you what you and others have clicked on before. Because people are drawn to the sensational, this dance between algorithms and human nature can foster the spread of misinformation.
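The feedback loop the researchers describe reduces to a few lines. The sketch below uses invented numbers: results are ranked by estimated click probability, every click raises that estimate, and whatever people are drawn to (here, the sensational item) locks in the top slot.

```python
# Minimal sketch of click-driven ranking with invented numbers: clicks raise
# an item's estimated click-through rate, which raises its rank, which earns
# it still more clicks.
clicks = {"sober explainer": 10, "shocking rumor": 10}
views  = {"sober explainer": 100, "shocking rumor": 100}

def ranked():
    # Rank purely by estimated click probability (clicks / views).
    return sorted(clicks, key=lambda item: clicks[item] / views[item],
                  reverse=True)

for _ in range(50):
    top = ranked()[0]
    views[top] += 1                # the top slot gets the impression
    if top == "shocking rumor":    # people are drawn to the sensational...
        clicks[top] += 1           # ...so its estimate keeps climbing

print(ranked())                    # ['shocking rumor', 'sober explainer']
```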
  • ...2 more annotations...
  • Search engine companies, like most online services, make money not only by selling ads but also by tracking users and selling their data through real-time bidding on it. People are often led to misinformation by their desire for sensational and entertaining news as well as information that is either controversial or confirms their views.
  • This pattern of thrilling and unverified stories emerging and people clicking on them continues, with people apparently either being unconcerned with the truth or believing that if a trusted service such as Google Search is showing these stories to them then the stories must be true. More recently, a disproven report claiming China let the coronavirus leak from a lab gained traction on search engines because of this vicious cycle.
runlai_jiang

Union urges Ford to make electric move - BBC News - 0 views

  • The Unite union has urged Ford to convert its UK engine plants to making electric cars and battery technology. Currently Ford makes engines for the Ford Fiesta and Jaguar Land Rover vehicles at Bridgend in South Wales and diesel engines in Dagenham in Essex.
  • Production of engines for Jaguar Land Rover at the Ford Engine plant in Bridgend will end in September 2020. Half of the workforce of 1,930 at Bridgend works on that engine. At the moment Ford Bridgend makes half a million engines a year for Ford's own models and a quarter of a million for JLR. After 2020 the number of engines being made on the site will be just a quarter of present production levels.
  • Figures from industry body the Society of Motor Manufacturers and Traders showed sales of new diesel cars fell 17.1% last year as higher taxes and pollution fears hit demand.
  • ...1 more annotation...
  • A recent study by Aston University forecast a further 10% drop in sales this year and estimated that diesel cars will account for less than a third of the market by 2020.
Javier E

Understanding What's Wrong With Facebook | Talking Points Memo - 0 views

  • to really understand the problem with Facebook we need to understand the structural roots of that problem, how much of it is baked into the core architecture of the site and its very business model
  • much of it is inherent in the core strategies of the post-2000, second wave Internet tech companies that now dominate our information space and economy.
  • Facebook is an ingenious engine for information and ideational manipulation.
  • ...17 more annotations...
  • Good old fashioned advertising does that to a degree. But Facebook is much more powerful, adaptive and efficient.
  • Facebook is designed to do specific things. It’s an engine to understand people’s minds and then manipulate their thinking.
  • Those tools are refined for revenue making but can be used for many other purposes. That makes it ripe for misuse and bad acting.
  • The core of all second wave Internet commerce operations was finding network models where costs grow arithmetically and revenues grow exponentially.
  • The network and its dominance is the product and once it takes hold the cost inputs remained constrained while the revenues grow almost without limit.
  • Facebook is best understood as a fantastically profitable nuclear energy company whose profitability is based on dumping the waste on the side of the road and accepting frequent accidents and explosions as inherent to the enterprise.
  • That’s why these companies employ so few people relative to scale and profitability.
  • That’s why there’s no phone support for Google or Facebook or Twitter. If half the people on the planet are ‘customers’ or users that’s not remotely possible.
  • The core economic model requires doing all of it on the cheap. Indeed, what Zuckerberg et al. have created with Facebook is so vast that the money required not to do it on the cheap almost defies imagination.
  • Facebook’s core model and concept requires not taking responsibility for what others do with the engine created to drive revenue.
  • It all amounts to a grand exercise in socializing the externalities and keeping all the revenues for the owners.
  • Here’s a way to think about it. Nuclear power is actually incredibly cheap. The fuel is fairly plentiful and easy to pull out of the ground. You set up a little engine and it generates energy almost without limit. What makes it ruinously expensive is managing the externalities – all the risks and dangers, the radiation, accidents, the constant production of radioactive waste.
  • managing or distinguishing between legitimate and bad-acting uses of the powerful Facebook engine is one that would require huge, huge investments of money and armies of workers to manage
  • But back to Facebook. The point is that they’ve created a hugely powerful and potentially very dangerous machine
  • The core business model is based on harvesting the profits from the commercial uses of the machine and using algorithms and very, very limited personnel (relative to scale) to try to get a handle on the most outrageous and shocking abuses which the engine makes possible.
  • Zuckerberg may be a jerk and there really is a culture of bad acting within the organization. But it’s not about him being a jerk. Replace him and his team with non-jerks and you’d still have a similar core problem.
  • To manage the potential negative externalities, to take some responsibility for all the dangerous uses the engine makes possible would require money the owners are totally unwilling and in some ways are unable to spend.
Javier E

How YouTube Drives People to the Internet's Darkest Corners - WSJ - 0 views

  • YouTube is the new television, with more than 1.5 billion users, and videos the site recommends have the power to influence viewpoints around the world.
  • Those recommendations often present divisive, misleading or false content despite changes the site has recently made to highlight more-neutral fare, a Wall Street Journal investigation found.
  • Behind that growth is an algorithm that creates personalized playlists. YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm among the single biggest deciders of what people watch.
  • ...25 more annotations...
  • People cumulatively watch more than a billion YouTube hours daily world-wide, a 10-fold increase from 2012
  • After the Journal this week provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem.
  • When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints.
  • Such recommendations play into concerns about how social-media sites can amplify extremist voices, sow misinformation and isolate users in “filter bubbles”
  • Unlike Facebook Inc. and Twitter Inc. sites, where users see content from accounts they choose to follow, YouTube takes an active role in pushing information to users they likely wouldn’t have otherwise seen.
  • “The editorial policy of these new platforms is to essentially not have one,”
  • “That sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win.’ But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and people are being gamed.”
  • YouTube has been tweaking its algorithm since last autumn to surface what its executives call “more authoritative” news sources.
  • YouTube last week said it is considering a design change to promote relevant information from credible news sources alongside videos that push conspiracy theories.
  • The Journal investigation found YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content.
  • YouTube engineered its algorithm several years ago to make the site “sticky”—to recommend videos that keep users staying to watch still more, said current and former YouTube engineers who helped build it. The site earns money selling ads that run before and during videos.
  • YouTube’s algorithm tweaks don’t appear to have changed how YouTube recommends videos on its home page. On the home page, the algorithm provides a personalized feed for each logged-in user largely based on what the user has watched.
  • There is another way to calculate recommendations, demonstrated by YouTube’s parent, Alphabet Inc.’s Google. It has designed its search-engine algorithms to recommend sources that are authoritative, not just popular.
  • Google spokeswoman Crystal Dahlen said that Google improved its algorithm last year “to surface more authoritative content, to help prevent the spread of blatantly misleading, low-quality, offensive or downright false information,” adding that it is “working with the YouTube team to help share learnings.”
  • In recent weeks, it has expanded that change to other news-related queries. Since then, the Journal’s tests show, news searches in YouTube return fewer videos from highly partisan channels.
  • YouTube’s recommendations became even more effective at keeping people on the site in 2016, when the company began employing an artificial-intelligence technique called a deep neural network that makes connections between videos that humans wouldn’t. The algorithm uses hundreds of signals, YouTube says, but the most important remains what a given user has watched.
  • Using a deep neural network makes the recommendations more of a black box to engineers than previous techniques,
  • “We don’t have to think as much,” he said. “We’ll just give it some raw data and let it figure it out.”
  • To better understand the algorithm, the Journal enlisted former YouTube engineer Guillaume Chaslot, who worked on its recommendation engine, to analyze thousands of YouTube’s recommendations on the most popular news-related queries
  • Mr. Chaslot created a computer program that simulates the “rabbit hole” users often descend into when surfing the site. In the Journal study, the program collected the top five results to a given search. Next, it gathered the top three recommendations that YouTube promoted once the program clicked on each of those results. Then it gathered the top three recommendations for each of those promoted videos, continuing four clicks from the original search.
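That crawl is simple to reproduce in outline. The sketch below follows the Journal's recipe (top results for a query, then the top three recommendations per video, four hops out) over a tiny invented graph; the `search` and `recommendations` functions stand in for whatever scraping or API access Chaslot's real program used.

```python
# Minimal sketch of the "rabbit hole" crawl described above, run over a toy
# recommendation graph. `search` and `recommendations` are invented stand-ins
# for YouTube's real endpoints.
TOY_RECS = {
    "news clip A": ["react video", "debate clip", "conspiracy intro"],
    "news clip B": ["debate clip", "conspiracy intro", "rant"],
}

def search(query, n=5):
    return ["news clip A", "news clip B"][:n]     # toy: only two results

def recommendations(video, n=3):
    default = ["conspiracy deep-dive", "rant", "react video"]
    return TOY_RECS.get(video, default)[:n]       # top three "up next" picks

def rabbit_hole(query, hops=4):
    collected, frontier = [], search(query)
    for _ in range(hops):                         # four clicks from the search
        frontier = [rec for video in frontier
                        for rec in recommendations(video)]
        collected.extend(frontier)
    return collected

print(len(rabbit_hole("9/11")))   # 6 + 18 + 54 + 162 = 240 recommendations
```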
  • The first analysis, of November’s top search terms, showed YouTube frequently led users to divisive and misleading videos. On the 21 news-related searches left after eliminating queries about entertainment, sports and gaming—such as “Trump,” “North Korea” and “bitcoin”—YouTube most frequently recommended these videos:
  • The algorithm doesn’t seek out extreme videos, they said, but looks for clips that data show are already drawing high traffic and keeping people on the site. Those videos often tend to be sensationalist and on the extreme fringe, the engineers said.
  • Repeated tests by the Journal as recently as this week showed the home page often fed far-right or far-left videos to users who watched relatively mainstream news sources, such as Fox News and MSNBC.
  • Searching some topics and then returning to the home page without doing a new search can produce recommendations that push users toward conspiracy theories even if they seek out just mainstream sources.
  • After searching for “9/11” last month, then clicking on a single CNN clip about the attacks, and then returning to the home page, the fifth and sixth recommended videos were about claims the U.S. government carried out the attacks. One, titled “Footage Shows Military Plane hitting WTC Tower on 9/11—13 Witnesses React”—had 5.3 million views.
katherineharron

The US was once the uncontested world leader in science and engineering. That's changed... - 0 views

  • The United States was once the dominant global leader in science and engineering, but that ranking has dropped as other countries invest in research and development, according to a new report.
  • The findings were presented this week in the State of US Science and Engineering 2020 report, compiled and published by the National Science Board and the National Science Foundation. The report is published every two years and submitted to Congress. "Our latest report shows the continued spread of [science and engineering] capacity across the globe, which is good for humanity because science is not a zero-sum game," said Diane Souvaine, National Science Board chair, in a statement. "However, it also means that where once the US was the uncontested leader in S&E, we now are playing a less-dominant role in many areas."
  • "Federal support of basic research drives innovation. Only the federal government can make a strategic, long-term commitment to creating new knowledge that [could] to lead to new or improved technologies, goods or services," said Julia Phillips, chair of the National Science Board's science and engineering policy committee. "Basic research is the 'seed corn' of our US S&E enterprise, a global competitive advantage, and the starting point for much of our GDP growth since World War II."
  • ...2 more annotations...
  • "The United States has seen its relative share of global S&T [science and technology] activity flatten or shrink, even as its absolute activity levels kept rising," the authors wrote in the report. "As more countries around the world develop R&D and human capital infrastructure to sustain and compete in a knowledge-oriented economy, the United States is playing a less dominant role in many areas of S&E [science and engineering] activity."
  • "Research is now a truly global enterprise. Opportunities are everywhere and humanity's collective knowledge is growing exponentially," Souvaine said. "To remain a leader, we need to tap into our American 'can do' spirit and recommit to strong partnerships among government, universities and industry that have been the hallmarks of our success. I believe we should react with excitement, not fear, because we are well positioned to compete, collaborate and thrive."
Javier E

Google Devising Radical Search Changes to Beat Back AI Rivals - The New York Times - 0 views

  • Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.
  • Google’s reaction to the Samsung threat was “panic,” according to internal messages reviewed by The New York Times. An estimated $3 billion in annual revenue was at stake with the Samsung contract. An additional $20 billion is tied to a similar Apple contract that will be up for renewal this year.
  • A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.
  • ...14 more annotations...
  • The Samsung threat represented the first potential crack in Google’s seemingly impregnable search business, which was worth $162 billion last year.
  • Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.
  • Google has been worried about A.I.-powered competitors since OpenAI, a San Francisco start-up that is working with Microsoft, demonstrated a chatbot called ChatGPT in November. About two weeks later, Google created a task force in its search division to start building A.I. products.
  • Google has been doing A.I. research for years. Its DeepMind lab in London is considered one of the best A.I. research centers in the world, and the company has been a pioneer with A.I. projects, such as self-driving cars and the so-called large language models that are used in the development of chatbots. In recent years, Google has used large language models to improve the quality of its search results, but held off on fully adopting A.I. because it has been prone to generating false and biased statements.
  • Now the priority is winning control of the industry’s next big thing. Last month, Google released its own chatbot, Bard, but the technology received mixed reviews.
  • The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information. It would also be more conversational — a bit like chatting with a helpful person.
  • Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction, such as buying shoes or booking a flight, for example, would still feature ads on their results pages.
  • Last week, Google invited some employees to test Magi’s features, and it has encouraged them to ask the search engine follow-up questions to judge its ability to hold a conversation. Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.
  • The company plans to initially release the features to a maximum of one million people. That number should progressively increase to 30 million by the end of the year. The features will be available exclusively in the United States.
  • Google has also explored efforts to let people use Google Earth’s mapping technology with help from A.I. and search for music through a conversation with a chatbot
  • A tool called GIFI would use A.I. to generate images in Google Image results.
  • Tivoli Tutor, would teach users a new language through open-ended A.I. text conversations.
  • Yet another product, Searchalong, would let users ask a chatbot questions while surfing the web through Google’s Chrome browser. People might ask the chatbot for activities near an Airbnb rental, for example, and the A.I. would scan the page and the rest of the internet for a response.
  • “If we are the leading search engine and this is a new attribute, a new feature, a new characteristic of search engines, we want to make sure that we’re in this race as well,”
Javier E

Computer Algorithms Rely Increasingly on Human Helpers - NYTimes.com - 0 views

  • Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.
  • And so, while programming experts still write the step-by-step instructions of computer code, additional people are needed to make more subtle contributions as the work the computers do has become more involved. People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
  • Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing. Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
  • ...3 more annotations...
  • When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages. “There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
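The dispatch Huffman describes (serve the curated summary when one exists, otherwise just the ranked links) is a simple lookup with fallback. A sketch, with an invented two-entry knowledge base:

```python
# Minimal sketch of "fetch curated facts if available, else ranked links."
# The knowledge-base entries are invented placeholders.
CURATED = {   # human-edited summaries (Wikipedia, Freebase, and the like)
    "barack obama": "44th president of the United States ...",
    "new york city": "Most populous city in the United States ...",
}

def respond(query, ranked_links):
    summary = CURATED.get(query.strip().lower())
    if summary is not None:
        return {"summary": summary, "links": ranked_links}  # panel + links
    return {"links": ranked_links}                          # links only

print(respond("Barack Obama", ["https://example.org/1"]))
```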
  • “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Mr. Huffman said.
  • Ben Taylor, 25, is a product manager at FindTheBest, a fast-growing start-up in Santa Barbara, Calif. The company calls itself a “comparison engine” for finding and comparing more than 100 topics and products, from universities to nursing homes, smartphones to dog breeds. Its Web site went up in 2010, and the company now has 60 full-time employees. Mr. Taylor helps design and edit the site’s education pages. He is not an engineer, but an English major who has become a self-taught expert in the arcane data found in Education Department studies and elsewhere. His research methods include talking to and e-mailing educators. He is an information sleuth.
Emilio Ergueta

Human engineering | New Philosopher - 0 views

  • You know the situation is getting desperate when three bioethicists propose genetically modifying humans to reduce our environmental impact. In a bizarre paper titled ‘Human engineering and climate change’, Matthew Liao, Anders Sandberg and Rebecca Roache argue we should seriously consider technologies to engineer human bodies to reduce carbon emissions.
  • The paper, to be published in a respectable journal, is beyond satire and its only likely effect is to bring the philosophy profession into disrepute. Philosophy, it seems, does not have a ‘laugh test’ for filtering out whacky proposals. So why stop at cat’s eyes and midget babies? Why not genetically modify people to make them white in order to cool the Earth by increasing its reflectivity?
  • The three bioethicists suggest that people who are appalled at the idea of human engineering may have a “status quo bias”, resisting their innovative ideas because of an inherent conservatism.
  • ...2 more annotations...
  • But perhaps the paper by Liao, Sandberg and Roache will turn out to be a prank played on the journal, like the Sokal hoax, named after the physicist whose paper deploying post-modern gobbledegook to show that “quantum gravity is a social and linguistic construct” was published in a cultural studies journal.
  • It’s easy to imagine academics sitting around swapping the most outrageous solutions to climate change and then daring one another to have them published. I hope this will turn out to be the case. In the meantime I cringe at the thought of what the long-dead giants of Western philosophy would make of their discipline’s response to the climate crisis.
anniina03

Boeing's Mission for NASA Gets Cut Short - The Atlantic - 0 views

  • It was a picture-perfect launch, just before sunrise on a sandy coastline. The rocket, bright as a candle flame, climbed steadily, leaving a spindly trail of smoke that split the sky in half, with the sharp darkness of night on one side and the first pastel hues of daylight on the other. It carried a capsule, bound for the International Space Station, to the edge of space and let go.
  • The trouble started after that. The capsule, built for NASA by Boeing, was supposed to ignite its own engines to boost itself higher into orbit, where it would chase after the space station. But the engines didn’t start when they should have.
  • But the craft was flying just out of reach of communication, between two satellites. When engineers could finally contact Starliner, they made the spacecraft thrust itself higher, but it was too late. The confused capsule had been burning fuel to maintain its position, and didn’t have enough left to execute that crucial push toward the ISS.
  • ...7 more annotations...
  • The capsule, named CST-100 Starliner, is part of a NASA program called Commercial Crew, an effort to launch astronauts to the ISS from the U.S. The agency has not had that capability since the space-shuttle program ended in 2011 under the weight of cost, safety, and political factors.
  • According to NASA officials, after Starliner separated from the rocket, the capsule missed the moment it needed to ignite its engines for a carefully timed and fully automated process known as an orbital insertion burn. Without that step, the spacecraft couldn’t fire the thrusters to shove itself into the correct orbit.
  • Engineers watched, unable to help from below, as the spacecraft became disoriented.
  • Jim Chilton, the senior vice president of Boeing Space and Launch, said engineers don’t know why the clock went off track. Nicole Mann, who would have made her first trip to space on the next mission, said that the astronauts “train extensively for this type of contingency, and had we been on board, there could have been actions that we could have taken,” such as manually controlling the spacecraft.
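The article does not disclose Starliner's flight software, but the failure mode it describes, automation keyed to a mission-elapsed clock that went off track, can be sketched abstractly; every number below is invented.

```python
# Abstract sketch of a burn keyed to a mission-elapsed timer: if the clock
# is off, the automation never sees the ignition window. All values invented.
BURN_START, BURN_END = 1_800, 1_860        # hypothetical window, seconds MET

def should_ignite(mission_elapsed_time: float) -> bool:
    return BURN_START <= mission_elapsed_time <= BURN_END

true_time = 1_830                          # physically inside the window
faulty_clock = true_time + 11 * 3_600      # timer far off track

print(should_ignite(true_time))            # True  -> burn would fire
print(should_ignite(faulty_clock))         # False -> capsule misses the burn
```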
  • Now the future of the program is uncertain, particularly for Boeing. It’s unclear what additional tests NASA might now require from Boeing before letting astronauts fly, including, perhaps, another attempt at an uncrewed mission. The capsule’s failure will certainly reshuffle schedules and could contribute to further delays for Boeing, already under scrutiny for its sluggishness in a recent report from NASA’s inspector general.
  • The spacecraft failure means yet another cycle of bad news for Boeing. The company has a long history with NASA; it was the prime contractor for the ISS and also worked on the space shuttles. But the company is better known for its airplanes, and in recent months the flaws of its 737 Max, which contributed to two deadly crashes, have put Boeing under intense pressure to prove that its aircraft are safe.
  • In the coming days, NASA and Boeing teams will review data from Starliner’s short-lived mission. And the space agency will continue its negotiations to buy more seats on the Soyuz, Russia’s transportation system, which can cost as much as $86 million. NASA’s last trip on the Soyuz system is scheduled for April. If neither company’s crew capsule is ready by then, NASA will have to buy more slots to ensure that American astronauts can launch to space.
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • How do you make a search engine that understands if you don’t know how you understand?
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • ut very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. (A toy sketch of this counting idea follows these notes.)
  • The Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
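
A toy sketch of the counting idea behind Candide, as promised above. This is emphatically not IBM's model: the corpus is invented, sentence alignment is ignored, and translation is word by word, purely to show how co-occurrence counts over aligned pairs can stand in for linguistic understanding.

    from collections import defaultdict

    # Toy aligned corpus, invented for illustration. Candide used 2.2
    # million sentence pairs from Canadian parliamentary proceedings.
    pairs = [
        ("the house", "la maison"),
        ("the car", "la voiture"),
        ("the blue house", "la maison bleue"),
        ("the blue car", "la voiture bleue"),
    ]

    # cooc[e][f]: in how many pairs does English word e co-occur with
    # French word f?  totals[f]: in how many pairs does f appear at all?
    cooc = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for en, fr in pairs:
        for f in fr.split():
            totals[f] += 1
            for e in en.split():
                cooc[e][f] += 1

    def translate_word(e):
        # Score each French candidate by how strongly it is associated
        # with e, normalized so ubiquitous words like "la" don't win
        # every time. Ties break toward the first candidate seen; a real
        # system estimates alignment probabilities instead of counting.
        return max(cooc[e], key=lambda f: cooc[e][f] / totals[f])

    print([translate_word(e) for e in "the blue house".split()])
    # -> ['la', 'bleue', 'maison']: the words are right, the order is
    # not, which is why real systems also model word order.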
Javier E

Google Alters Search to Handle More Complex Queries - NYTimes.com - 0 views

  • Google on Thursday announced one of the biggest changes to its search engine, a rewriting of its algorithm to handle more complex queries that affects 90 percent of all searches.
  • The company made the changes, executives said, because Google users are asking increasingly long and complex questions and are searching Google more often on mobile phones with voice search.
  • “They said, ‘Let’s go back and basically replace the engine of a 1950s car,’ ” said Danny Sullivan, founding editor of Search Engine Land, an industry blog. “It’s fair to say the general public seemed not to have noticed that Google ripped out its engine while driving down the road and replaced it with something else.”
  • Google originally matched keywords in a search query to the same words on Web pages. Hummingbird is the culmination of a shift to understanding the meaning of phrases in a query and displaying Web pages that more accurately match that meaning
  • The algorithm also builds on work Google has done to understand conversational language, like interpreting what pronouns in a search query refer to. Hummingbird extends that to all Web searches, not just results related to entities included in the Knowledge Graph. It tries to connect phrases and understand concepts in a long query. (A toy contrast of keyword versus meaning matching follows these notes.)
  • The outcome is not a change in how Google searches the Web, but in the results that it shows. Unlike some of its other algorithm changes, including one that pushed down so-called content farms in search results, Hummingbird is unlikely to noticeably affect certain categories of Web businesses, Mr. Sullivan said. Instead, Google says it believes that users will see more precise results
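
Google has not published Hummingbird's internals, so the following is only a hypothetical contrast between literal keyword overlap and matching after words are mapped to shared concepts. The concept table, query, and page below are all invented.

    # Toy contrast: keyword matching vs. crude "meaning" matching.
    # The CONCEPTS table is invented; Hummingbird's real machinery
    # (entities, conversational context, the Knowledge Graph) is far richer.
    CONCEPTS = {
        "buy": "purchase", "purchase": "purchase",
        "car": "automobile", "vehicle": "automobile",
        "cheap": "inexpensive", "inexpensive": "inexpensive",
    }

    def keyword_score(query, page):
        # Literal overlap: how many exact words do query and page share?
        return len(set(query.split()) & set(page.split()))

    def concept_score(query, page):
        # Map every word to its concept (or itself), then overlap.
        q = {CONCEPTS.get(w, w) for w in query.split()}
        p = {CONCEPTS.get(w, w) for w in page.split()}
        return len(q & p)

    query = "buy cheap car"
    page = "inexpensive vehicle to purchase"
    print(keyword_score(query, page))  # 0: no words in common
    print(concept_score(query, page))  # 3: same meaning, different words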
Javier E

Humans, Version 3.0 § SEEDMAGAZINE.COM - 0 views

  • Where are we humans going, as a species? If science fiction is any guide, we will genetically evolve like in X-Men, become genetically engineered as in Gattaca, or become cybernetically enhanced like General Grievous in Star Wars.
  • There is, however, another avenue for human evolution, one mostly unappreciated in both science and fiction. It is this unheralded mechanism that will usher in the next stage of human, giving future people exquisite powers we do not currently possess, powers worthy of natural selection itself. And, importantly, it doesn’t require us to transform into cyborgs or bio-engineered lab rats. It merely relies on our natural bodies and brains functioning as they have for millions of years. This mystery mechanism of human transformation is neuronal recycling, coined by neuroscientist Stanislas Dehaene, wherein the brain’s innate capabilities are harnessed for altogether novel functions.
  • The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances.
  • Like all animal brains, human brains are not general-purpose universal learning machines, but, instead, are intricately structured suites of instincts optimized for the environments in which they evolved. To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process.
  • there is a very good reason to be optimistic that the next stage of human will come via the form of adaptive harnessing, rather than direct technological enhancement: It has already happened. We have already been transformed via harnessing beyond what we once were. We’re already Human 2.0, not the Human 1.0, or Homo sapiens, that natural selection made us. We Human 2.0’s have, among many powers, three that are central to who we take ourselves to be today: writing, speech, and music (the latter perhaps being the pinnacle of the arts). Yet these three capabilities, despite having all the hallmarks of design, were not a result of natural selection, nor were they the result of genetic engineering or cybernetic enhancement to our brains. Instead, and as I argue in both The Vision Revolution and my forthcoming Harnessed, these are powers we acquired by virtue of harnessing, or neuronal recycling.
  • Although the step from Human 1.0 to 2.0 was via cultural selection, not via explicit human designers, does the transformation to Human 3.0 need to be entirely due to a process like cultural evolution, or might we have any hope of purposely guiding our transformation? When considering our future, that’s probably the most relevant question we should be asking ourselves.
  • One of my reasons for optimism is that nature-harnessing technologies (like writing, speech, and music) must mimic fundamental ecological features in nature, and that is a much easier task for scientists to tackle than emulating the exorbitantly complex mechanisms of the brain
Javier E

Dealing With an Identity Hijacked on the Online Highway - NYTimes.com - 0 views

  • his predicament stands as a chilling example of what it means to be at the mercy of the Google algorithm.
  • The question is best directed at the search engines. And Google’s defense — that the behavior of its ever-improving algorithm should be considered independent of the results it produces in a particular controversial case — has a particularly patronizing air, especially when it comes to hurting living, breathing people.
  • it was the algorithm that took the hit, and washed away accountability.
  • “When a company is filled with engineers, it turns to engineering to solve problems,” he wrote candidly. “Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data.”
Javier E

Clive Thompson on Memory Engineering | Magazine - 0 views

  • a new trend I call memory engineering — the process of fashioning our inchoate digital pasts into useful memories.
  • Many of us generate massive amounts of personal data every day — phonecam pictures, text messages, status updates, and so on. By default, all of us are becoming lifeloggers. But we almost never go back and look at this stuff, because it’s too hard to parse.
  • Memory engineers are solving that problem by creating services that reformat that data in witty, often artistic ways.
  • Lifeloggers have long touted the “total recall” that’s achievable if you obsessively store and organize personal records: Never forget a thing! But Wegener has found that less can be more. When you show someone their year-old check-ins and nothing else, it’s a very crude signal — just a bunch of points on a map. But our brains seize these cues and fill in the details
  • these techniques can also work with “semantic” memories of facts and info. Last winter, Amazon released a clever app called Daily Review, which takes your Kindle clippings and redisplays them for you weeks or months later — timed on a schedule that’s designed to help you absorb your reading more deeply into your brain
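
Amazon has not said how Daily Review times its redisplays; the standard technique such apps draw on is spaced repetition with expanding intervals. A minimal sketch, with invented interval lengths:

    from datetime import date, timedelta

    # Expanding gaps between reviews, the core of spaced repetition.
    # These particular interval lengths (in days) are invented.
    INTERVALS = [1, 3, 7, 14, 30, 90]

    def next_review(first_seen: date, reviews_done: int) -> date:
        """Date a clipping should resurface after n completed reviews."""
        days = sum(INTERVALS[: reviews_done + 1])
        return first_seen + timedelta(days=days)

    clipped = date(2024, 1, 1)
    for n in range(4):
        print(n, next_review(clipped, n))
    # 0 2024-01-02, 1 2024-01-05, 2 2024-01-12, 3 2024-01-26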
Javier E

Why Science Majors Change Their Minds (It's Just So Darn Hard) - NYTimes.com - 1 views

  • roughly 40 percent of students planning engineering and science majors end up switching to other subjects or failing to get any degree. That increases to as much as 60 percent when pre-medical students, who typically have the strongest SAT scores and high school science preparation, are included
  • the attrition rate can be higher at the most selective schools, where he believes the competition overwhelms even well-qualified students.
  • the main majors are difficult and growing more complex. Some students still lack math preparation or aren’t willing to work hard enough.
  • there could be more subtle problems at work, like the proliferation of grade inflation in the humanities and social sciences, which provides another incentive for students to leave STEM majors. It is no surprise that grades are lower in math and science, where the answers are clear-cut and there are no bonus points for flair. Professors also say they are strict because science and engineering courses build on one another, and a student who fails to absorb the key lessons in one class will flounder in the next.
  • The National Science Board, a public advisory body, warned in the mid-1980s that students were losing sight of why they wanted to be scientists and engineers in the first place. Research confirmed in the 1990s that students learn more by grappling with open-ended problems, like creating a computer game or designing an alternative energy system, than listening to lectures. While the National Science Foundation went on to finance pilot courses that employed interactive projects, when the money dried up, so did most of the courses. Lecture classes are far cheaper to produce, and top professors are focused on bringing in research grants, not teaching undergraduates.
  • Since becoming Notre Dame’s dean in 2008, Dr. Kilpatrick has revamped and expanded a freshman design course that had gotten “a little bit stale.” The students now do four projects. They build Lego robots and design bridges capable of carrying heavy loads at minimal cost. They also create electronic circuit boards and dream up a project of their own.
  • Some new students do not have a good feel for how deeply technical engineering is. Other bright students may have breezed through high school without developing disciplined habits. By contrast, students in China and India focus relentlessly on math and science from an early age.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore.” (A toy version of such a color-coded screen follows these notes.)
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk. (An illustrative inverted-U scoring of exchange counts also follows these notes.)
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
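
A toy version of the color-coded screen described in the Xerox notes above. Every feature name, weight, and threshold below is invented; the real model and its inputs are proprietary.

    # Hypothetical color-coded applicant screen in the style attributed
    # to Xerox: features in, red/yellow/green out. All numbers invented.
    WEIGHTS = {
        "creativity": 2.0,         # "creative but not overly inquisitive"
        "inquisitiveness": -1.0,
        "scenario_judgment": 1.5,  # the multiple-choice scenario answers
    }

    def screen(applicant: dict) -> str:
        score = sum(w * applicant.get(k, 0.0) for k, w in WEIGHTS.items())
        # The article's oddity: one to four social networks scored well;
        # none, or more than four, did not.
        if 1 <= applicant.get("social_networks", 0) <= 4:
            score += 1.0
        if score >= 5.0:
            return "green"   # hire away
        if score >= 3.0:
            return "yellow"  # middling
        return "red"         # poor candidate

    print(screen({"creativity": 2.0, "inquisitiveness": 0.5,
                  "scenario_judgment": 1.0, "social_networks": 2}))
    # -> green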
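
And a hypothetical rendering of Pentland's inverted-U finding that team performance peaks at a moderate number of face-to-face exchanges. The optimum and width are invented numbers, not estimates from his badge data.

    import math

    OPTIMUM = 40.0  # hypothetical best count of daily exchanges
    WIDTH = 20.0    # hypothetical falloff scale

    def predicted_performance(exchanges_per_day: float) -> float:
        """Score in (0, 1] that peaks at OPTIMUM and falls off both ways."""
        z = (exchanges_per_day - OPTIMUM) / WIDTH
        return math.exp(-0.5 * z * z)

    for n in (5, 40, 120):
        print(n, round(predicted_performance(n), 2))
    # 5 0.22, 40 1.0, 120 0.0: too many exchanges hurt as much as too few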
cvanderloo

NASA fires SLS moon rocket in major test, but engines shut down early - 0 views

  • the test in Mississippi was cut short after a malfunction caused an automatic abort.
  • "We did get an MCF on engine four," a control room member said less than a minute into the test fire, using an initialism that stands for "major component malfunction."
  • The test was meant to last eight minutes – the full duration needed for the booster during its Artemis program liftoff – but ran for less than two minutes.
  • An exact plan moving forward, which could mean a second test and delay before transport to Florida, had not yet been released by Saturday evening.
  • NASA said the RS-25 engines have been upgraded and refurbished since their last missions, and the agency has placed an order with Aerojet Rocketdyne (which Lockheed Martin had recently agreed to acquire) for 24 brand-new versions.
Javier E

The Lasting Lessons of John Conway's Game of Life - The New York Times - 0 views

  • “Because of its analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called ‘simulation games,’” Mr. Gardner wrote when he introduced Life to the world 50 years ago with his October 1970 column.
  • The Game of Life motivated the use of cellular automata in the rich field of complexity science, with simulations modeling everything from ants to traffic, clouds to galaxies. More trivially, the game attracted a cult of “Lifenthusiasts,” programmers who spent a lot of time hacking Life — that is, constructing patterns in hopes of spotting new Life-forms.
  • The tree of Life also includes oscillators, such as the blinker, and spaceships of various sizes (the glider being the smallest).
  • Patterns that didn’t change from one generation to the next, Dr. Conway called still lifes — such as the four-celled block, the six-celled beehive or the eight-celled pond. Patterns that took a long time to stabilize, he called methuselahs. (A minimal implementation of Life’s rules is sketched at the end of these notes.)
  • I first encountered Life at the Exploratorium in San Francisco in 1978. I was hooked immediately by the thing that has always hooked me — watching complexity arise out of simplicity.
  • Life shows you two things. The first is sensitivity to initial conditions. A tiny change in the rules can produce a huge difference in the output, ranging from complete destruction (no dots) through stasis (a frozen pattern) to patterns that keep changing as they unfold.
  • The second thing Life shows us is something that Darwin hit upon when he was looking at Life, the organic version. Complexity arises from simplicity!
  • Life shows us complex virtual “organisms” arising out of the interaction of a few simple rules — so goodbye “Intelligent Design.”
  • I’ve wondered for decades what one could learn from all that Life hacking. I recently realized it’s a great place to try to develop “meta-engineering” — to see if there are general principles that govern the advance of engineering and help us predict the overall future trajectory of technology.
  • Melanie Mitchell— Professor of complexity, Santa Fe Institute
  • Given Conway’s proof that the Game of Life can be made to simulate a Universal Computer — that is, it could be “programmed” to carry out any computation that a traditional computer can do — the extremely simple rules can give rise to the most complex and most unpredictable behavior possible. This means that there are certain properties of the Game of Life that can never be predicted, even in principle!
  • I use the Game of Life to make vivid for my students the ideas of determinism, higher-order patterns and information. One of its great features is that nothing is hidden; there are no black boxes in Life, so you know from the outset that anything that you can get to happen in the Life world is completely unmysterious and explicable in terms of a very large number of simple steps by small items.
  • In Thomas Pynchon’s novel “Gravity’s Rainbow,” a character says, “But you had taken on a greater and more harmful illusion. The illusion of control. That A could do B. But that was false. Completely. No one can do. Things only happen.”This is compelling but wrong, and Life is a great way of showing this.
  • In Life, we might say, things only happen at the pixel level; nothing controls anything, nothing does anything. But that doesn’t mean that there is no such thing as action, as control; it means that these are higher-level phenomena composed (entirely, with no magic) from things that only happen.
  • Stephen Wolfram— Scientist and C.E.O., Wolfram Research
  • Brian Eno— Musician, London
  • Bert Chan— Artificial-life researcher and creator of the continuous cellular automaton “Lenia,” Hong Kong
  • it did have a big impact on beginner programmers, like me in the 90s, giving them a sense of wonder and a kind of confidence that some easy-to-code math models can produce complex and beautiful results. It’s like a starter kit for future software engineers and hackers, together with Mandelbrot Set, Lorenz Attractor, et cetera.
  • if we think about our everyday life, about corporations and governments, the cultural and technical infrastructures humans built for thousands of years, they are not unlike the incredible machines that are engineered in Life.
  • In normal times, they are stable and we can keep building stuff one component upon another, but in harder times like this pandemic or a new Cold War, we need something that is more resilient and can prepare for the unpreparable. That would need changes in our “rules of life,” which we take for granted.
  • Rudy Rucker— Mathematician and author of “Ware Tetralogy,” Los Gatos, Calif.
  • That’s what chaos is about. The Game of Life, or a kinky dynamical system like a pair of pendulums, or a candle flame, or an ocean wave, or the growth of a plant — they aren’t readily predictable. But they are not random. They do obey laws, and there are certain kinds of patterns — chaotic attractors — that they tend to produce. But again, unpredictable is not random. An important and subtle distinction which changed my whole view of the world.
  • William Poundstone— Author of “The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge,” Los Angeles, Calif.
  • The Game of Life’s pulsing, pyrotechnic constellations are classic examples of emergent phenomena, introduced decades before that adjective became a buzzword.
  • Fifty years later, the misfortunes of 2020 are the stuff of memes. The biggest challenges facing us today are emergent: viruses leaping from species to species; the abrupt onset of wildfires and tropical storms as a consequence of a small rise in temperature; economies in which billions of free transactions lead to staggering concentrations of wealth; an internet that becomes more fraught with hazard each year
  • Looming behind it all is our collective vision of an artificial intelligence-fueled future that is certain to come with surprises, not all of them pleasant.
  • The name Conway chose — the Game of Life — frames his invention as a metaphor. But I’m not sure that even he anticipated how relevant Life would become, and that in 50 years we’d all be playing an emergent game of life and death.
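
As flagged in the notes above, here is a minimal implementation of Life's standard rules: a live cell with two or three live neighbors survives, a dead cell with exactly three is born, and everything else is dead. The glider serves as a built-in check.

    from collections import Counter

    def step(live):
        """Advance one generation; `live` is a set of (x, y) cells."""
        # Count live neighbors for every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    # The glider, Life's smallest spaceship: after four generations the
    # same five cells reappear shifted one step diagonally.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    g = glider
    for _ in range(4):
        g = step(g)
    assert g == {(x + 1, y + 1) for (x, y) in glider}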