
TOK Friends: Group items tagged artificial intelligence


Emilio Ergueta

Minds and Computers: An Introduction to AI by Matt Carter | Issue 68 | Philosophy Now - 0 views

  • his main concern is to outline and defend the possibility of a computational theory of mind.
  • there can be systems which display (and so have) mentality simply in virtue of instantiating certain computer programs – but, on the other hand, our best available programs are ‘woefully inadequate’ to that task.
  • For students of artificial intelligence (AI), the book explains very clearly why the whole artificial intelligence project presupposes substantive and controversial answers to some traditional philosophical questions.
  • ...3 more annotations...
  • One central problem for artificial intelligence is how to get aboutness into computer programs – how to get semantics out of syntactics.
  • Visual experience is beyond merely having certain physical inputs in the forms of light waves, undergoing certain transformations in the brain and producing physical outputs such as speaking the sentence “There is something red.”
  • He needs to explain how he thinks a computational account can be provided of qualia; or he needs to abandon a qualia-based account of experience, in favour of some computational account; or he needs to abandon his conclusion that there is no objection in principle to a purely computational account of the mind.
johnsonel7

Musicians Using AI to Create Otherwise Impossible New Songs | Time.com - 0 views

  • In November, the musician Grimes made a bold prediction. “I feel like we’re in the end of art, human art,” she said on Sean Carroll's Mindscape podcast. “Once there’s actually AGI (Artificial General Intelligence), they’re gonna be so much better at making art than us.”
  • Artificial intelligence has already upended many blue-collar jobs across various industries; the possibility that music, a deeply personal and subjective form, could also be optimized was enough to cause widespread alarm.
  • While obstacles like copyright complications and other hurdles have yet to be worked out, musicians working with AI hope that the technology will become a democratizing force and an essential part of everyday musical creation.
  • ...3 more annotations...
  • Stavitsky realized that while people are increasingly plugging into headphones to get them through the day, “there’s no playlist or song that can adapt to the context of whatever’s happening around you," he says. His app takes several real-time factors into account — including the weather, the listener's heart rate, physical activity rate, and circadian rhythms — in generating gentle music that’s designed to help people sleep, study or relax.
  • “AI forced us to come up against patterns that have no relationship to comfort. It gave us the skills to break out of our own habits,” she says. The project resulted in the first Grammy nomination of YACHT’s two-decade career, for best immersive audio album.
  • “There’s something freeing about not having to make every single microdecision, but rather, creating an ecosystem where things tend to happen, but never in the order you were imagining them,” she says. “It opens up a world of possibilities.” She says that she has a few new music projects coming this year using Bronze’s technology.
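
A minimal sketch of how a context-adaptive generator like the one Stavitsky describes might map real-time signals to musical parameters. Every name, threshold, and mapping below is an illustrative assumption rather than the app's actual logic:

    from dataclasses import dataclass

    @dataclass
    class Context:
        heart_rate_bpm: int   # from a wearable
        is_raining: bool      # from a weather service
        hour_of_day: int      # 0-23, a crude circadian proxy

    def music_params(ctx: Context) -> dict:
        # Track the listener rather than a fixed playlist: slower, sparser
        # music when the heart rate is low or it is late at night.
        tempo = max(50, min(90, ctx.heart_rate_bpm - 10))
        if ctx.hour_of_day >= 22 or ctx.hour_of_day < 6:
            tempo = min(tempo, 60)    # wind down at night
        mode = "minor" if ctx.is_raining else "major"
        density = "sparse" if tempo < 65 else "medium"
        return {"tempo_bpm": tempo, "mode": mode, "note_density": density}

    # A rainy late evening at a resting heart rate yields calm parameters:
    print(music_params(Context(heart_rate_bpm=62, is_raining=True, hour_of_day=23)))
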
anniina03

A.I. Comes to the Operating Room - The New York Times - 0 views

  • Brain surgeons are bringing artificial intelligence and new imaging techniques into the operating room, to diagnose tumors as accurately as pathologists, and much faster, according to a report in the journal Nature Medicine.
  • The traditional method, which requires sending the tissue to a lab, freezing and staining it, then peering at it through a microscope, takes 20 to 30 minutes or longer. The new technique takes two and a half minutes.
  • In addition to speeding up the process, the new technique can also detect some details that traditional methods may miss, like the spread of a tumor along nerve fibers
  • ...6 more annotations...
  • The new process may also help in other procedures where doctors need to analyze tissue while they are still operating, such as head and neck, breast, skin and gynecologic surgery, the report said. It also noted that there is a shortage of neuropathologists, and suggested that the new technology might help fill the gap in medical centers that lack the specialty.
  • Algorithms are also being developed to help detect lung cancers on CT scans, diagnose eye disease in people with diabetes and find cancer on microscope slides.
  • The diagnoses were later judged right or wrong based on whether they agreed with the findings of lengthier and more extensive tests performed after the surgery. The result was a draw: humans, 93.9 percent correct; A.I., 94.6 percent.
  • At some centers, he said, brain surgeons do not even order frozen sections because they do not trust them and prefer to wait for tissue processing after the surgery, which may take weeks to complete.
  • Some types of brain tumor are so rare that there is not enough data on them to train an A.I. system, so the system in the study was designed to essentially toss out samples it could not identify.
  • “It won’t change brain surgery,” he said, “but it’s going to add a significant new tool, more significant than they’ve stated.”
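
The “toss out samples it could not identify” behavior described above is, in essence, classification with an abstain option. A minimal sketch, assuming a softmax classifier, an arbitrary 0.9 confidence cutoff, and illustrative tumor classes (none of these specifics come from the study):

    import numpy as np

    def classify_or_abstain(logits, labels, threshold=0.9):
        # Softmax over class scores; abstain when the top probability is low,
        # mirroring a system designed to discard samples it cannot identify.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        best = int(probs.argmax())
        return labels[best] if probs[best] >= threshold else None

    labels = ["glioma", "meningioma", "metastasis"]                # illustrative
    print(classify_or_abstain(np.array([4.0, 1.0, 0.5]), labels))  # 'glioma'
    print(classify_or_abstain(np.array([1.2, 1.1, 1.0]), labels))  # None (abstain)
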
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • The Reformers
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model. (A toy sketch of this loop follows these annotations.)
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • The Shoggoth image, @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AIs. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape, so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
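
The R.L.H.F. process quoted above (humans score responses; the scores feed back into the model) can be caricatured in a few lines. This toy stand-in only captures the shape of the loop; production systems fit a neural reward model and fine-tune with reinforcement learning:

    import random
    random.seed(0)

    CANDIDATES = ["polite, careful answer", "rambling answer", "hostile answer"]
    preference = {c: 0.0 for c in CANDIDATES}   # the model's learned scores

    def human_score(response):
        # Stand-in for a human rater who rewards polite, harmless output.
        return {"polite, careful answer": 1.0,
                "rambling answer": 0.3,
                "hostile answer": -1.0}[response]

    for _ in range(200):
        response = random.choice(CANDIDATES)    # the model tries a response
        reward = human_score(response)          # a human scores it
        # Feed the score back: nudge the stored preference toward the reward.
        preference[response] += 0.1 * (reward - preference[response])

    print(max(preference, key=preference.get))  # polite, careful answer

In the meme's terms, this loop is the smiley-face mask: it adjusts which behaviors surface without making the underlying model any less inscrutable.
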
margogramiak

How To Fight Deforestation In The Amazon From Your Couch | HuffPost - 0 views

  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
  • Some 15% of the Amazon, the world’s largest rainforest and a crucial carbon repository, has been cut or burned down. Around two-thirds of the Amazon lie within Brazil’s borders, where almost 157 square miles of forest were cleared in April alone. In addition to storing billions of tons of carbon, the Amazon is home to tens of millions of people and some 10% of the Earth’s biodiversity.
    • margogramiak
       
      all horrifying stats.
  • you just have to be a citizen that is concerned about the issue of deforestation,
    • margogramiak
       
      that's me!
  • ...12 more annotations...
  • to build an artificial intelligence model that can recognize signs of deforestation. That data can be used to alert governments and conservation organizations where intervention is needed and to inform policies that protect vital ecosystems. It may even one day predict where deforestation is likely to happen next.
    • margogramiak
       
      That sounds super cool, and definitely useful.
  • To monitor deforestation, conservation organizations need an eye in the sky.
    • margogramiak
       
      bird's eye view pictures of deforestation are always super impactful.
  • WRI’s Global Forest Watch online tracking system receives images of the world’s forests taken every few days by NASA satellites. A simple computer algorithm scans the images, flagging instances where before there were trees and now there are not. But slight disturbances, such as clouds, can trip up the computer, so experts are increasingly interested in using artificial intelligence. (A sketch of this before/after flagging rule follows these annotations.)
    • margogramiak
       
      that's so cool.
  • Inman was surprised how willing people have been to spend their time clicking on abstract-looking pictures of the Amazon.
    • margogramiak
       
      I'm glad so many people want to help.
  • “Look at these nine blocks and make a judgment about each one. Does that satellite image look like a situation where human beings have transformed the landscape in some way?” Inman explained.
    • margogramiak
       
      seems simple enough
  • It’s not always easy; that’s the point. For example, a brown patch in the trees could be the result of burning to clear land for agriculture (earning a check mark for human impact), or it could be the result of a natural forest fire (no check mark). Keen users might be able to spot subtle signs of intervention the computer would miss, like the thin yellow line of a dirt road running through the clearing. 
    • margogramiak
       
      I was thinking about this issue... that's a hard problem to solve.
  • SAS’s website offers a handful of examples comparing natural forest features and manmade changes. 
    • margogramiak
       
      I guess that would be helpful. What happens if someone messes up though?
  • users have analyzed almost 41,000 images, covering an area of rainforest nearly the size of the state of Montana. Deforestation caused by human activity is evident in almost 2 in 5 photos.
    • margogramiak
       
      wow.
  • The researchers hope to use historical images of these new geographies to create a predictive model that could identify areas most at risk of future deforestation. If they can show that their AI model is successful, it could be useful for NGOs, governments and forest monitoring bodies, enabling them to carefully track forest changes and respond by sending park rangers and conservation teams to threatened areas. In the meantime, it’s a great educational tool for the citizen scientists who use the app
    • margogramiak
       
      But then what do they do with this data? How do they use it to make a difference?
  • Users simply select the squares in which they’ve spotted some indication of human impact: the tell-tale quilt of farm plots, a highway, a suspiciously straight edge of tree line. 
    • margogramiak
       
      I could do that!
  • we have still had people from 80 different countries come onto the app and make literally hundreds of judgments that enabled us to resolve 40,000 images,
    • margogramiak
       
      I like how in a sense it makes all the users one big community because of their common goal of wanting to help the earth.
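
The before/after flagging rule quoted earlier, along with the cloud problem that motivates the move to AI, fits in a short sketch. The vegetation-index grids and thresholds are illustrative assumptions, not Global Forest Watch's actual pipeline:

    import numpy as np

    def flag_tree_loss(before, after, veg_threshold=0.6, cloud_mask=None):
        # Flag pixels that were vegetated in the earlier image but not the later one.
        was_forest = before >= veg_threshold
        is_forest = after >= veg_threshold
        loss = was_forest & ~is_forest
        if cloud_mask is not None:
            # A cloud can look like clearing; masking it out removes exactly
            # the false positives that trip up the simple algorithm.
            loss &= ~cloud_mask
        return loss

    before = np.array([[0.8, 0.9], [0.7, 0.2]])   # toy vegetation-index grids
    after  = np.array([[0.8, 0.3], [0.1, 0.2]])
    clouds = np.array([[False, True], [False, False]])
    print(flag_tree_loss(before, after, cloud_mask=clouds))
    # [[False False]
    #  [ True False]]  -> only the genuinely cleared pixel is flagged
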
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • ...34 more annotations...
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • The modern economy is complex, and most of these trends have multiple causes.
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  • economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
Javier E

Reality is your brain's best guess - Big Think - 0 views

  • Andy Clark admits it’s strange that he took up “predictive processing,” an ambitious leading theory of how the brain works. A philosopher of mind at the University of Sussex, he has devoted his career to how thinking doesn’t occur just between the ears—that it flows through our bodies, tools, and environments. “The external world is functioning as part of our cognitive machinery.”
  • But 15 years ago, he realized that something had to come back to the center of the system: the brain. And he found that predictive processing provided the essential links among the brain, body, and world.
  • There’s a traditional view that goes back at least to Descartes that perception was about the imprinting of the outside world onto the sense organs. In 20th-century artificial intelligence and neuroscience, vision was a feed-forward process in which you took in pixel-level information, refined it into a two and a half–dimensional sketch, and then refined that into a full world model.
  • ...9 more annotations...
  • a new book, The Experience Machine: How Our Minds Predict and Shape Reality, which is remarkable for how it connects the high-level concepts to everyday examples of how our brains make predictions, how that process can lead us astray, and what we can do about it.
  • being driven to stay within your own viability envelope is crucial to the kind of intelligence that we know about—the kind of intelligence that we are
  • If you ask what is a predictive brain for, the answer has to be: staying alive. Predictive brains are a way of staying within your viability envelope as an embodied biological organism: getting food when you need it, getting water when you need it.
  • in predictive processing, perception is structured around prediction. Perception is about the brain having a guess at what’s most likely to be out there and then using sensory information to refine the guess.
  • artificial curiosity. Predictive-processing systems automatically have that. They’re set up so that they predict the conditions of their own survival, and they’re always trying to get rid of prediction errors. But if they’ve solved all their practical problems and they’ve got nothing else to do, then they’ll just explore. Getting rid of any error is going to be a good thing for them. If you’re a creature like that, you’re going to be a really good learning system. You’re going to love to inhabit the environments that you can learn most from, where the problems are not too simple, not too hard, but just right.
  • It’s an effect that you also see in Marieke Jepma et al.’s work on pain. They showed that if you predict intense pain, the signal that you get will be interpreted as more painful than it would otherwise be, and vice versa. Then they asked why you don’t correct your misimpression. If it’s my expectation that is making it feel more painful, why don’t I get prediction errors that correct it?
  • The reason is that there are no errors. You’re expecting a certain level of pain, and your prediction helps bring that level about; there is nothing for you to correct. In fact, you’ve got confirmation of your own prediction. So it can be a vicious circle
  • Do you think this self-fulfilling loop in psychosis and pain perception helps to account for misinformation in our society and people’s susceptibility to certain narratives? Absolutely. We all have these vulnerabilities and self-fulfilling cycles. We look at the places that tend to support the models that we already have, because that’s often how we judge whether the information is good or not
  • Given that we know we’re vulnerable to self-fulfilling information loops, how can we make sure we don’t get locked into a belief? Unfortunately, it’s really difficult. The most potent intervention is to remind ourselves that we sample the world in ways that are guided by the models that we’ve currently got. The structures of science are there to push back against our natural tendency to cherry-pick.
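
A worked toy version of the loop Clark describes: perception starts from the brain's guess and moves toward the sensory input by a precision-weighted prediction error. With a strongly trusted prior, the same stimulus is perceived differently, which is the Jepma pain result in miniature. All numbers are illustrative:

    def perceive(prior_guess, sensory_input, prior_precision, sensory_precision):
        # One precision-weighted update: the more the prior is trusted
        # relative to the senses, the less the guess moves.
        w = sensory_precision / (sensory_precision + prior_precision)
        return prior_guess + w * (sensory_input - prior_guess)

    # The same mild stimulus (4 on a 0-10 pain scale), two expectations:
    print(perceive(8.0, 4.0, prior_precision=3.0, sensory_precision=1.0))  # 7.0
    print(perceive(2.0, 4.0, prior_precision=3.0, sensory_precision=1.0))  # 2.5

Expecting intense pain yields a percept near 7; expecting mild pain, one near 2.5. And because the percept then confirms the prediction, no error signal remains to correct it, which is the vicious circle described above.
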
Javier E

Yuval Noah Harari paints a grim picture of the AI age, roots for safety checks | Techno... - 0 views

  • Yuval Noah Harari, known for the acclaimed non-fiction book Sapiens: A Brief History of Humankind, in his latest article in The Economist, has said that artificial intelligence has “hacked” the operating system of human civilization
  • he said that the newly emerged AI tools in recent years could threaten the survival of human civilization from an “unexpected direction.”
  • He demonstrated how AI could impact culture by talking about language, which is integral to human culture. “Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artifacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artifacts we created by inventing myths and writing scriptures,” wrote Harari.
  • ...8 more annotations...
  • He stated that democracy is also a language that dwells on meaningful conversations, and when AI hacks language it could also destroy democracy.
  • The 47-year-old wrote that the biggest challenge of the AI age was not the creation of intelligent tools but striking a collaboration between humans and machines.
  • To highlight the extent of how AI-driven misinformation can change the course of events, Harari touched upon the cult QAnon, a political movement affiliated with the far-right in the US. QAnon disseminated misinformation via “Q drops” that were seen as sacred by followers.
  • Harari also shed light on how AI could form intimate relationships with people and influence their decisions. “Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews,” he wrote. To demonstrate this, he cited the example of Blake Lemoine, a Google engineer who lost his job after publicly claiming that the AI chatbot LaMDA had become sentient. According to the historian, the controversial claim cost Lemoine his job. He asked if AI can influence people to risk their jobs, what else could it induce them to do?
  • Harari also said that intimacy was an effective weapon in the political battle of minds and hearts. He said that in the past few years, social media has become a battleground for controlling human attention, and the new generation of AI can convince people to vote for a particular politician or buy a certain product.
  • In his bid to call attention to the need to regulate AI technology, Harari said that the first regulation should be to make it mandatory for AI to disclose that it is an AI. He said it was important to put a halt on ‘irresponsible deployment’ of AI tools in the public domain, and regulating it before it regulates us.
  • The author also shed light on how the current social and political systems are incapable of dealing with the challenges posed by AI. Harari emphasised the need to have an ethical framework to respond to challenges posed by AI.
  • He argued that while GPT-3 had made remarkable progress, it was far from replacing human interactions
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence. (A toy version of this next-word loop follows these annotations.)
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
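
The generation loop described earlier in these annotations (guess the most likely next word, one word at a time) can be made concrete with a toy bigram table. A real model conditions on the whole sequence with a neural network, so this sketch shares only the shape of the loop:

    from collections import Counter, defaultdict

    corpus = ("the model guesses the next word and the next word follows "
              "the model").split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1                    # count word -> next-word pairs

    word, output = "the", ["the"]
    for _ in range(6):
        if word not in bigrams:                    # dead end: nothing ever followed it
            break
        word = bigrams[word].most_common(1)[0][0]  # greedy: most likely next word
        output.append(word)
    print(" ".join(output))    # the model guesses the model guesses the

Even the toy version shows greedy decoding falling into a loop, one reason real systems sample from the distribution instead of always taking the single most likely word.
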
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprung up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog. It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014. The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I. could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There’s a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel, written with Adam King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they’ve been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it’s computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That’s the minimal computational unit, so you’ve got to look in the brain for those. You’re never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You’ve got to start by looking for what’s there and what’s working and you see that from Marr’s highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there’s things like it in the past. It’s basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn’t have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer:] ...in engineering? [Chomsky:] ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don’t stop to ask if the wiring diagram is the right level of abstraction -- maybe it’s not.
  • the right approach is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there’s going to be a thousand other variables intervening -- kind of like what’s happening outside the window -- and you’ll sort of tack those on later on if you want better approximations. That’s a different approach.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there’s some evidence that that’s one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it’s got some computational system inside which is saying "okay, here’s what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren’t used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You’re not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it’s the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just as it wouldn’t have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don’t think it’s our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don’t think that’s the way our world works, I don’t think it’s the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don’t happen. Plenty of things that go on in the biology in organisms aren’t like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It’s not random mutation and natural selection; it’s a law of physics. There’s no reason to think that laws of physics stop there, they work all the way through. [Interviewer:] Well, they constrain the biology, sure. [Chomsky:] Okay, well then it’s not just random mutation and selection. It’s random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in, say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don’t know what we’re looking for any more than Galileo did, and there’s a lot to learn from that.
Roth johnson

If I Had a Hammer - NYTimes.com - 0 views

  •  
    Interesting article on artificial intelligence: how "first machines" (machines that needed input) are disappearing and "second machines" (machines that can make decisions for themselves) are taking the place of white-collar and blue-collar jobs.
Javier E

Armies of Expensive Lawyers, Replaced by Cheaper Software - NYTimes.com - 0 views

  • thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost.
  • Computers are getting better at mimicking human reasoning — as viewers of “Jeopardy!” found out when they saw Watson beat its human opponents — and they are claiming work once done by people in high-paying professions. The number of computer chip designers, for example, has largely stagnated because powerful software programs replace the work once done by legions of logic designers and draftsmen.
  • Software is also making its way into tasks that were the exclusive province of human decision makers, like loan and mortgage officers and tax accountants.
  • ...4 more annotations...
  • “We’re at the beginning of a 10-year period where we’re going to transition from computers that can’t understand language to a point where computers can understand quite a bit about language.”
  • E-discovery technologies generally fall into two broad categories that can be described as “linguistic” and “sociological.”
  • The most basic linguistic approach uses specific search words to find and sort relevant documents. More advanced programs filter documents through a large web of word and phrase definitions.
  • The sociological approach adds an inferential layer of analysis, mimicking the deductive powers of a human Sherlock Holmes.
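A minimal sketch of the "basic linguistic approach" the article describes, using hypothetical documents and search terms; real e-discovery systems layer far more sophisticated filtering on top of this idea:

```python
import re

SEARCH_TERMS = {"merger", "confidential", "offshore"}   # reviewer-chosen keywords

def relevance(doc):
    """Count how many words in the document hit the search-term list."""
    words = re.findall(r"[a-z']+", doc.lower())
    return sum(1 for w in words if w in SEARCH_TERMS)

documents = [
    "Minutes of the confidential merger discussion.",
    "Cafeteria menu for the week.",
    "Wire instructions for the offshore account.",
]

# Highest-scoring documents surface first for human review.
for score, doc in sorted(((relevance(d), d) for d in documents), reverse=True):
    print(score, doc)
```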
anonymous

Controversial Quantum Machine Tested by NASA and Google Shows Promise | MIT Technology ... - 0 views

  • artificial-intelligence software.
  • Google says it has proof that a controversial machine it bought in 2013 really can use quantum physics to work through a type of math that’s crucial to artificial intelligence much faster than a conventional computer.
  • “It is a truly disruptive technology that could change how we do everything,” said Rupak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.
  • ...7 more annotations...
  • An alternative algorithm is known that could have let the conventional computer be more competitive, or even win, by exploiting what Neven called a “bug” in D-Wave’s design. Neven said the test his group staged is still important because that shortcut won’t be available to regular computers when they compete with future quantum annealers capable of working on larger amounts of data.
  • “For a specific, carefully crafted proof-of-concept problem we achieve a 100-million-fold speed-up,” said Neven.
  • “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer.
  • Google is competing with D-Wave to make a quantum annealer that could do useful work.
  • Martinis is also working on quantum hardware that would not be limited to optimization problems, as annealers are.
  • Government and university labs, Microsoft (see “Microsoft’s Quantum Mechanics”), and IBM (see “IBM Shows Off a Quantum Computing Chip”) are also working on that technology.
  • “it may be several years before this research makes a difference to Google products.”
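Quantum annealers like the one described above minimize the energy of an optimization problem. A rough classical cousin, simulated annealing, gives the flavor of what "annealing" toward a low-energy solution means; the three-variable problem and all couplings below are made up for illustration:

```python
import math
import random

random.seed(0)
J = {(0, 1): 1.0, (1, 2): -1.0, (0, 2): 1.0}   # pairwise couplings (made up)
h = [0.1, -0.2, 0.3]                            # per-variable biases (made up)

def energy(s):
    """Ising energy of a spin assignment s, with s[i] in {-1, +1}."""
    return (sum(w * s[i] * s[j] for (i, j), w in J.items())
            + sum(h_i * s_i for h_i, s_i in zip(h, s)))

s = [random.choice([-1, 1]) for _ in range(3)]
for step in range(1000):
    T = max(0.01, 2.0 * (1 - step / 1000))      # slowly lower the "temperature"
    i = random.randrange(3)
    candidate = s[:]
    candidate[i] *= -1                           # propose flipping one spin
    dE = energy(candidate) - energy(s)
    if dE < 0 or random.random() < math.exp(-dE / T):
        s = candidate                            # downhill moves always; uphill early on
print(s, energy(s))                              # a low-energy assignment
```

A quantum annealer attacks the same kind of energy landscape, but uses quantum effects rather than thermal noise to escape local minima.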
Javier E

Opinion | A.I. Is Harder Than You Think - The New York Times - 1 views

  • The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.
  • The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules.
  • A genuine, human-level A.I. will need to be able to cope with all of those possible sentences, not just a small fragment of them.
  • ...3 more annotations...
  • No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
  • Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs.
  • That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
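The article's point about generativity, that a modest set of words and rules yields unboundedly many sentences, is easy to demonstrate. A sketch with a hypothetical toy grammar; the single recursive rule is what opens up the infinite space:

```python
import random

random.seed(1)
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion -> no upper bound
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"], ["idea"]],
    "V":  [["chased"], ["saw"], ["slept"]],
}

def generate(symbol="S"):
    """Expand a symbol by randomly chosen rules until only words remain."""
    if symbol not in GRAMMAR:
        return [symbol]                       # a terminal word
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))
```

Five rule types and eight words already generate sentences of unbounded length ("the dog that chased the cat that saw the idea slept"), which is why coverage of a "small fragment" can never close the gap.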
sissij

Prejudice AI? Machine Learning Can Pick up Society's Biases | Big Think - 1 views

  • We think of computers as emotionless automatons and artificial intelligence as stoic, zen-like programs, mirroring Mr. Spock, devoid of prejudice and unable to be swayed by emotion.
  • They say that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results of this study were published in the journal Science.
  • After interacting with certain users, [Microsoft’s chatbot Tay] began spouting racist remarks.
  • ...2 more annotations...
  • It just learns everything from us and, as our echo, picks up the prejudices we’ve become deaf to.
  • AI will have to be programmed to embrace equality.
  •  
    I just feel like this is so ironic. As the parents of the AI, humans themselves can't even be equal, so how can we expect the robot we made to perform perfect humanity and embrace flawless equality? I think equality itself is flawed. How can we define equality? Just like we cannot define fairness, we cannot define equality. I think this robot picking up racist remarks shows how children become racist. It also reflects how powerful cultural context and social norms are. They can shape us subconsciously. --Sissi (4/20/2017)
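The Science study measured bias roughly like this: compare how close a target word's embedding sits to "pleasant" versus "unpleasant" attribute words. A sketch of the idea with made-up three-dimensional vectors; real studies use embeddings trained on web-scale text:

```python
import math

vectors = {                      # hypothetical 3-d word embeddings
    "flower":   (0.9, 0.1, 0.2),
    "insect":   (0.1, 0.9, 0.3),
    "pleasant": (0.8, 0.2, 0.1),
    "awful":    (0.2, 0.8, 0.4),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def association(word):
    """Positive: closer to 'pleasant'; negative: closer to 'awful'."""
    v = vectors[word]
    return cosine(v, vectors["pleasant"]) - cosine(v, vectors["awful"])

print(association("flower"))   # positive: leans pleasant
print(association("insect"))   # negative: leans unpleasant
```

Because the vectors are learned from human-written text, whatever associations the text carries, including ones about race and gender, end up encoded in these distances.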
Javier E

Accelerationism: how a fringe philosophy predicted the future we live in | World news |... - 1 views

  • Roger Zelazny published his third novel. In many ways, Lord of Light was of its time, shaggy with imported Hindu mythology and cosmic dialogue. Yet there were also glints of something more forward-looking and political.
  • accelerationism has gradually solidified from a fictional device into an actual intellectual movement: a new way of thinking about the contemporary world and its potential.
  • Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.
  • ...31 more annotations...
  • Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled.
  • Accelerationism, therefore, goes against conservatism, traditional socialism, social democracy, environmentalism, protectionism, populism, nationalism, localism and all the other ideologies that have sought to moderate or reverse the already hugely disruptive, seemingly runaway pace of change in the modern world
  • Robin Mackay and Armen Avanessian in their introduction to #Accelerate: The Accelerationist Reader, a sometimes baffling, sometimes exhilarating book, published in 2014, which remains the only proper guide to the movement in existence.
  • “We all live in an operating system set up by the accelerating triad of war, capitalism and emergent AI,” says Steve Goodman, a British accelerationist
  • A century ago, the writers and artists of the Italian futurist movement fell in love with the machines of the industrial era and their apparent ability to invigorate society. Many futurists followed this fascination into war-mongering and fascism.
  • One of the central figures of accelerationism is the British philosopher Nick Land, who taught at Warwick University in the 1990s
  • Land has published prolifically on the internet, not always under his own name, about the supposed obsolescence of western democracy; he has also written approvingly about “human biodiversity” and “capitalistic human sorting” – the pseudoscientific idea, currently popular on the far right, that different races “naturally” fare differently in the modern world; and about the supposedly inevitable “disintegration of the human species” when artificial intelligence improves sufficiently.
  • In our politically febrile times, the impatient, intemperate, possibly revolutionary ideas of accelerationism feel relevant, or at least intriguing, as never before. Noys says: “Accelerationists always seem to have an answer. If capitalism is going fast, they say it needs to go faster. If capitalism hits a bump in the road, and slows down” – as it has since the 2008 financial crisis – “they say it needs to be kickstarted.”
  • On alt-right blogs, Land in particular has become a name to conjure with. Commenters have excitedly noted the connections between some of his ideas and the thinking of both the libertarian Silicon Valley billionaire Peter Thiel and Trump’s iconoclastic strategist Steve Bannon.
  • “In Silicon Valley,” says Fred Turner, a leading historian of America’s digital industries, “accelerationism is part of a whole movement which is saying, we don’t need [conventional] politics any more, we can get rid of ‘left’ and ‘right’, if we just get technology right. Accelerationism also fits with how electronic devices are marketed – the promise that, finally, they will help us leave the material world, all the mess of the physical, far behind.”
  • In 1972, the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari published Anti-Oedipus. It was a restless, sprawling, appealingly ambiguous book, which suggested that, rather than simply oppose capitalism, the left should acknowledge its ability to liberate as well as oppress people, and should seek to strengthen these anarchic tendencies, “to go still further … in the movement of the market … to ‘accelerate the process’”.
  • By the early 90s Land had distilled his reading, which included Deleuze and Guattari and Lyotard, into a set of ideas and a writing style that, to his students at least, were visionary and thrillingly dangerous. Land wrote in 1992 that capitalism had never been properly unleashed, but instead had always been held back by politics, “the last great sentimental indulgence of mankind”. He dismissed Europe as a sclerotic, increasingly marginal place, “the racial trash-can of Asia”. And he saw civilisation everywhere accelerating towards an apocalypse: “Disorder must increase... Any [human] organisation is ... a mere ... detour in the inexorable death-flow.”
  • With the internet becoming part of everyday life for the first time, and capitalism seemingly triumphant after the collapse of communism in 1989, a belief that the future would be almost entirely shaped by computers and globalisation – the accelerated “movement of the market” that Deleuze and Guattari had called for two decades earlier – spread across British and American academia and politics during the 90s. The Warwick accelerationists were in the vanguard.
  • In the US, confident, rainbow-coloured magazines such as Wired promoted what became known as “the Californian ideology”: the optimistic claim that human potential would be unlocked everywhere by digital technology. In Britain, this optimism influenced New Labour
  • The Warwick accelerationists saw themselves as participants, not traditional academic observers
  • The CCRU gang formed reading groups and set up conferences and journals. They squeezed into the narrow CCRU room in the philosophy department and gave each other impromptu seminars.
  • The main result of the CCRU’s frantic, promiscuous research was a conveyor belt of cryptic articles, crammed with invented terms, sometimes speculative to the point of being fiction.
  • At Warwick, however, the prophecies were darker. “One of our motives,” says Plant, “was precisely to undermine the cheery utopianism of the 90s, much of which seemed very conservative” – an old-fashioned male desire for salvation through gadgets, in her view.
  • K-punk was written by Mark Fisher, formerly of the CCRU. The blog retained some Warwick traits, such as quoting reverently from Deleuze and Guattari, but it gradually shed the CCRU’s aggressive rhetoric and pro-capitalist politics for a more forgiving, more left-leaning take on modernity. Fisher increasingly felt that capitalism was a disappointment to accelerationists, with its cautious, entrenched corporations and endless cycles of essentially the same products. But he was also impatient with the left, which he thought was ignoring new technology
  • Alex Williams, with Nick Srnicek, co-wrote a Manifesto for an Accelerationist Politics. “Capitalism has begun to constrain the productive forces of technology,” they wrote. “[Our version of] accelerationism is the basic belief that these capacities can and should be let loose … repurposed towards common ends … towards an alternative modernity.”
  • What that “alternative modernity” might be was barely, but seductively, sketched out, with fleeting references to reduced working hours, to technology being used to reduce social conflict rather than exacerbate it, and to humanity moving “beyond the limitations of the earth and our own immediate bodily forms”. On politics and philosophy blogs from Britain to the US and Italy, the notion spread that Srnicek and Williams had founded a new political philosophy: “left accelerationism”.
  • Two years later, in 2015, they expanded the manifesto into a slightly more concrete book, Inventing the Future. It argued for an economy based as far as possible on automation, with the jobs, working hours and wages lost replaced by a universal basic income. The book attracted more attention than a speculative leftwing work had for years, with interest and praise from intellectually curious leftists
  • Even the thinking of the arch-accelerationist Nick Land, who is 55 now, may be slowing down. Since 2013, he has become a guru for the US-based far-right movement neoreaction, or NRx as it often calls itself. Neoreactionaries believe in the replacement of modern nation-states, democracy and government bureaucracies by authoritarian city states, which on neoreaction blogs sound as much like idealised medieval kingdoms as they do modern enclaves such as Singapore.
  • Land argues now that neoreaction, like Trump and Brexit, is something that accelerationists should support, in order to hasten the end of the status quo.
  • In 1970, the American writer Alvin Toffler, an exponent of accelerationism’s more playful intellectual cousin, futurology, published Future Shock, a book about the possibilities and dangers of new technology. Toffler predicted the imminent arrival of artificial intelligence, cryonics, cloning and robots working behind airline check-in desks
  • Land left Britain. He moved to Taiwan “early in the new millennium”, he told me, then to Shanghai “a couple of years later”. He still lives there now.
  • In a 2004 article for the Shanghai Star, an English-language paper, he described the modern Chinese fusion of Marxism and capitalism as “the greatest political engine of social and economic development the world has ever known”
  • Once he lived there, Land told me, he realised that “to a massive degree” China was already an accelerationist society: fixated by the future and changing at speed. Presented with the sweeping projects of the Chinese state, his previous, libertarian contempt for the capabilities of governments fell away
  • Without a dynamic capitalism to feed off, as Deleuze and Guattari had in the early 70s, and the Warwick philosophers had in the 90s, it may be that accelerationism just races up blind alleys. In his 2014 book about the movement, Malign Velocities, Benjamin Noys accuses it of offering “false” solutions to current technological and economic dilemmas. With accelerationism, he writes, a breakthrough to a better future is “always promised and always just out of reach”.
  • “The pace of change accelerates,” concluded a documentary version of the book, with a slightly hammy voiceover by Orson Welles. “We are living through one of the greatest revolutions in history – the birth of a new civilisation.”
  • Shortly afterwards, the 1973 oil crisis struck. World capitalism did not accelerate again for almost a decade. For much of the “new civilisation” Toffler promised, we are still waiting.
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • ...25 more annotations...
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’”
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
Javier E

The Lasting Lessons of John Conway's Game of Life - The New York Times - 0 views

  • “Because of its analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called ‘simulation games,’” Mr. Gardner wrote when he introduced Life to the world 50 years ago with his October 1970 column.
  • The Game of Life motivated the use of cellular automata in the rich field of complexity science, with simulations modeling everything from ants to traffic, clouds to galaxies. More trivially, the game attracted a cult of “Lifenthusiasts,” programmers who spent a lot of time hacking Life — that is, constructing patterns in hopes of spotting new Life-forms.
  • The tree of Life also includes oscillators, such as the blinker, and spaceships of various sizes (the glider being the smallest).
  • ...24 more annotations...
  • Patterns that didn’t change one generation to the next, Dr. Conway called still lifes — such as the four-celled block, the six-celled beehive or the eight-celled pond. Patterns that took a long time to stabilize, he called methuselahs.
  • The second thing Life shows us is something that Darwin hit upon when he was looking at Life, the organic version. Complexity arises from simplicity!
  • I first encountered Life at the Exploratorium in San Francisco in 1978. I was hooked immediately by the thing that has always hooked me — watching complexity arise out of simplicity.
  • Life shows you two things. The first is sensitivity to initial conditions. A tiny change in the rules can produce a huge difference in the output, ranging from complete destruction (no dots) through stasis (a frozen pattern) to patterns that keep changing as they unfold.
  • Life shows us complex virtual “organisms” arising out of the interaction of a few simple rules — so goodbye “Intelligent Design.”
  • I’ve wondered for decades what one could learn from all that Life hacking. I recently realized it’s a great place to try to develop “meta-engineering” — to see if there are general principles that govern the advance of engineering and help us predict the overall future trajectory of technology.
  • Melanie Mitchell— Professor of complexity, Santa Fe Institute
  • Given Conway’s proof that the Game of Life can be made to simulate a Universal Computer — that is, it could be “programmed” to carry out any computation that a traditional computer can do — the extremely simple rules can give rise to the most complex and most unpredictable behavior possible. This means that there are certain properties of the Game of Life that can never be predicted, even in principle!
  • I use the Game of Life to make vivid for my students the ideas of determinism, higher-order patterns and information. One of its great features is that nothing is hidden; there are no black boxes in Life, so you know from the outset that anything that you can get to happen in the Life world is completely unmysterious and explicable in terms of a very large number of simple steps by small items.
  • In Thomas Pynchon’s novel “Gravity’s Rainbow,” a character says, “But you had taken on a greater and more harmful illusion. The illusion of control. That A could do B. But that was false. Completely. No one can do. Things only happen.” This is compelling but wrong, and Life is a great way of showing this.
  • In Life, we might say, things only happen at the pixel level; nothing controls anything, nothing does anything. But that doesn’t mean that there is no such thing as action, as control; it means that these are higher-level phenomena composed (entirely, with no magic) from things that only happen.
  • Stephen Wolfram— Scientist and C.E.O., Wolfram Research
  • Brian Eno— Musician, London
  • Bert Chan— Artificial-life researcher and creator of the continuous cellular automaton “Lenia,” Hong Kong
  • it did have a big impact on beginner programmers, like me in the 90s, giving them a sense of wonder and a kind of confidence that some easy-to-code math models can produce complex and beautiful results. It’s like a starter kit for future software engineers and hackers, together with Mandelbrot Set, Lorenz Attractor, et cetera.
  • if we think about our everyday life, about corporations and governments, the cultural and technical infrastructures humans built for thousands of years, they are not unlike the incredible machines that are engineered in Life.
  • In normal times, they are stable and we can keep building stuff one component upon another, but in harder times like this pandemic or a new Cold War, we need something that is more resilient and can prepare for the unpreparable. That would need changes in our “rules of life,” which we take for granted.
  • Rudy Rucker— Mathematician and author of “Ware Tetralogy,” Los Gatos, Calif.
  • That’s what chaos is about. The Game of Life, or a kinky dynamical system like a pair of pendulums, or a candle flame, or an ocean wave, or the growth of a plant — they aren’t readily predictable. But they are not random. They do obey laws, and there are certain kinds of patterns — chaotic attractors — that they tend to produce. But again, unpredictable is not random. An important and subtle distinction which changed my whole view of the world.
  • William Poundstone— Author of “The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge,” Los Angeles, Calif.
  • The Game of Life’s pulsing, pyrotechnic constellations are classic examples of emergent phenomena, introduced decades before that adjective became a buzzword.
  • Fifty years later, the misfortunes of 2020 are the stuff of memes. The biggest challenges facing us today are emergent: viruses leaping from species to species; the abrupt onset of wildfires and tropical storms as a consequence of a small rise in temperature; economies in which billions of free transactions lead to staggering concentrations of wealth; an internet that becomes more fraught with hazard each year
  • Looming behind it all is our collective vision of an artificial intelligence-fueled future that is certain to come with surprises, not all of them pleasant.
  • The name Conway chose — the Game of Life — frames his invention as a metaphor. But I’m not sure that even he anticipated how relevant Life would become, and that in 50 years we’d all be playing an emergent game of life and death.
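The rules the contributors keep returning to really are that simple: a live cell survives with two or three live neighbors, and a dead cell is born with exactly three. A minimal sketch, seeded with the glider mentioned above:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for x, y in glider})  # True: the glider moved diagonally
```

Everything in the annotations above, from still lifes to the universal computer, emerges from those two lines of rule logic.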