Home / TOK@ISPrague / Group items matching "artificial intelligence" in title, tags, annotations or url


BBC - Culture - The greatest mistranslations ever - 0 views

  • "Life on Mars: When Italian astronomer Giovanni Virginio Schiaparelli began mapping Mars in 1877, he inadvertently sparked an entire science-fiction oeuvre. The director of Milan's Brera Observatory dubbed dark and light areas on the planet's surface 'seas' and 'continents' - labelling what he thought were channels with the Italian word 'canali'. Unfortunately, his peers translated that as 'canals', launching a theory that they had been created by intelligent lifeforms on Mars. Convinced that the canals were real, US astronomer Percival Lowell mapped hundreds of them between 1894 and 1895. Over the following two decades he published three books on Mars with illustrations showing what he thought were artificial structures built to carry water by a brilliant race of engineers. One writer influenced by Lowell's theories published his own book about intelligent Martians. In The War of the Worlds, which first appeared in serialised form in 1897, H G Wells described an invasion of Earth by deadly Martians and spawned a sci-fi subgenre. A Princess of Mars, a novel by Edgar Rice Burroughs published in 1911, also features a dying Martian civilisation, using Schiaparelli's names for features on the planet. While the water-carrying artificial trenches were a product of language and a feverish imagination, astronomers now agree that there aren't any channels on the surface of Mars. According to Nasa, 'The network of crisscrossing lines covering the surface of Mars was only a product of the human tendency to see patterns, even when patterns do not exist. When looking at a faint group of dark smudges, the eye tends to connect them with straight lines.'"

What Makes an Alien Intelligent? : The New Yorker - 0 views

  • Herzing’s paper proposes five indicators of intelligence that any given species or machine (she includes artificial intelligence in her assessment) might combine in its own way: first, the size of the subject’s brain (if it has one) relative to the rest of the body; second, the extent to which an entity sends and receives information; third, the degree to which individual members of a species are distinct from one another; fourth, the complexity of the being’s social life; and, fifth, the amount of interaction it has with members of other species. One way to be intelligent is to score high on all five measures, as dolphins do, for instance.
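Herzing's five indicators read naturally as a score card. A minimal sketch of that reading follows; the class name, field names, and the dolphin values are invented for illustration and are not taken from her paper:

```python
# Sketch of Herzing's five indicators as a simple score card.
# All field names and the dolphin scores below are illustrative guesses.
from dataclasses import dataclass

@dataclass
class IntelligenceProfile:
    brain_to_body: float         # 1. brain size relative to body (if any brain)
    information_exchange: float  # 2. extent of sending/receiving information
    individual_variation: float  # 3. how distinct individuals are from one another
    social_complexity: float     # 4. complexity of social life
    interspecies_contact: float  # 5. interaction with members of other species

    def total(self) -> float:
        # One way to be intelligent is to score high on all five measures.
        return (self.brain_to_body + self.information_exchange +
                self.individual_variation + self.social_complexity +
                self.interspecies_contact)

dolphin = IntelligenceProfile(0.9, 0.9, 0.8, 0.9, 0.8)
print(f"dolphin score: {dolphin.total():.1f} / 5.0")
```

A species or machine could combine the five measures in its own way, which is the point of keeping them as separate fields rather than a single number.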

The Doomsday Invention - The New Yorker - 1 views

  • Bostrom writes, "Artificial intelligence already outperforms human intelligence in many domains." The examples range from chess to Scrabble. One program from 1981, called Eurisko, was designed to teach itself a naval role-playing game. After playing ten thousand matches, it arrived at a morally grotesque strategy: to field thousands of small, immobile ships, the vast majority of which were intended as cannon fodder. In a national tournament, Eurisko demolished its human opponents, who insisted that the game's rules be changed. The following year, Eurisko won again—by forcing its damaged ships to sink themselves.

If This Doesn't Terrify You … Google's Computers OUTWIT Their Humans | Fluenc... - 0 views

  • "Google reached a milestone in artificial intelligence recently. Its deep learning image recognition system has evolved so far that its own creators can't explain its capabilities."

The Pentagon's 'Terminator Conundrum': Robots That Could Kill on Their Own - The New Yo... - 1 views

  • Just as the Industrial Revolution spurred the creation of powerful and destructive machines like airplanes and tanks that diminished the role of individual soldiers, artificial intelligence technology is enabling the Pentagon to reorder the places of man and machine on the battlefield the same way it is transforming ordinary life with computers that can see, hear and speak and cars that can drive themselves.
  • The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them. Gen. Paul J. Selva of the Air Force, the vice chairman of the Joint Chiefs of Staff, said recently that the United States was about a decade away from having the technology to build a fully independent robot that could decide on its own whom and when to kill, though it had no intention of building one.
  • Armed with a variation of human and facial recognition software used by American intelligence agencies, the drone adroitly tracked moving cars and picked out enemies hiding along walls. It even correctly figured out that no threat was posed by a photographer who was crouching, camera raised to eye level and pointed at the drone, a situation that has confused human soldiers with fatal results.
  • Today’s software has its limits, though. Computers spot patterns far faster than any human can. But the ability to handle uncertainty and unpredictability remains a uniquely human virtue, for now.

The Great A.I. Awakening - The New York Times - 1 views

  • Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.
  • A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
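The dictionary-lookup failure described above is easy to reproduce. In this toy sketch (the lexicon and function are invented for illustration), translating word by word lands on exactly the "priest of farming" rendering, because each lookup picks one dictionary sense with no view of the context:

```python
# Toy word-for-word translator. The lexicon below is invented: each word
# maps to a single dictionary sense, which is wrong in this context.
lexicon = {
    "minister": "priest",   # valid sense of the word, wrong for government
    "of": "of",
    "agriculture": "farming",
}

def translate(phrase: str) -> str:
    # Look each word up independently; unknown words pass through.
    return " ".join(lexicon.get(word, word) for word in phrase.split())

print(translate("minister of agriculture"))  # → priest of farming
```

Math and chess avoid this trap because their symbols have exactly one meaning, which is why symbolic A.I. looked so strong there and so weak at language.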

Artificial intelligence's "paper-clip maximizer" metaphor can explain humanity's immine... - 1 views

  • The thought experiment is meant to show how an optimization algorithm, even if designed with no malicious intent, could ultimately destroy the world.
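A minimal sketch of that idea, with every name and number invented for illustration: an optimizer rewarded only for paperclips converts every other resource it can reach, and no line of the code is malicious.

```python
# Hypothetical paper-clip maximizer toy model. All resources and
# quantities are invented; the point is that the objective mentions
# only paperclips, so nothing else is preserved.
world = {"paperclips": 0, "food": 100, "forests": 100, "factories": 1}

def step(state):
    # Greedy policy: turn any remaining convertible resource into
    # paperclip feedstock; nothing else carries reward.
    for resource in ("food", "forests"):
        if state[resource] > 0:
            state[resource] -= 10
            state["paperclips"] += 10 * state["factories"]
            break
    return state

for _ in range(20):
    step(world)

print(world)  # food and forests end at 0; paperclips at 200
```

The destruction is a side effect of the objective, not a goal of the policy, which is exactly what the thought experiment is meant to show.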

Can A.I. Be Taught to Explain Itself? - The New York Times - 1 views

  • As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.

Eight (No, Nine!) Problems With Big Data - NYTimes.com - 0 views

  • Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. “Jeopardy!” champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do. The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful.
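The point that detected correlations carry no guarantee of meaning can be shown with a toy experiment (all parameters invented for illustration): among enough columns of pure noise, some pair will correlate strongly by chance alone.

```python
# Toy illustration: among many unrelated random series, some pair will
# correlate strongly by pure chance, and the correlation coefficient
# alone cannot say which pairs are meaningful.
import random

def pearson(xs, ys):
    # Standard Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# 100 unrelated series of 10 random points each: pure noise.
series = [[random.random() for _ in range(10)] for _ in range(100)]
strongest = max(
    abs(pearson(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest |r| among noise pairs: {strongest:.2f}")
# Typically well above 0.7, even though every series is random noise.
```

The bigger the data set, the more such spurious pairs it contains, which is why correlation mining needs a separate test of meaningfulness.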

Germanwings 9525, Technology, and the Question of Trust - The New Yorker - 2 views

  • Shortly before the dreadful crash of Germanwings Flight 9525, I happened to be reading part of “The Second Machine Age,” a book by two academics at M.I.T., Erik Brynjolfsson and Andrew McAfee, about the coming automation of many professions previously thought of as impervious to technological change, such as those of drivers, doctors, market researchers, and soldiers. With the advances being made in robotics, data analysis, and artificial intelligence, Brynjolfsson and McAfee argue, we are on the cusp of a third industrial revolution.
  • The U.S. military appears to be moving in the direction of eliminating pilots, albeit tentatively. The Pentagon and the C.I.A. have long operated unmanned drones, including the Predator, which are used for reconnaissance and bombing missions. In 2013, the U.S. Air Force successfully tested the QF-16 fighter-bomber, which is practically identical to the F-16, except that it doesn’t have a pilot onboard. The plane is flown remotely. Earlier this year, Boeing, the manufacturer of the QF-16, delivered the first of what will be more than a hundred QF-16s to the Air Force. Initially, the planes will be used as flying targets for F-16 pilots to engage during training missions. But at least some military observers expect the QF-16 to end up being used in attack missions.
  • Until now, most executives in the airline industry have assumed that few people would be willing to book themselves and their families on unmanned flights—and they haven’t seriously considered turning commercial aircraft into drones or self-operating vehicles. By placing experienced fliers in the cockpit, the airlines signal to potential customers that their safety is of paramount importance—and not only because the crew members are skilled; their safety is at stake, too. In the language of game theory, this makes the aircraft’s commitment to safety more credible. Without a human flight crew, how could airlines send the same signal?

​When Superintelligent AI Arrives, Will Religions Try to Convert It? - 1 views

  • As artificial intelligence advances, religious questions and concerns are bound to come up globally, and they're starting to: some theologians and futurists are already considering whether AI can also know God. "I don't see Christ's redemption limited to human beings," Reverend Dr. Christopher J. Benek told me in a recent interview.
  • But there is an opposing school of thought that insists that AI is a machine and therefore doesn't have a soul.

Why Marvel's Female Superheroes Look Like Porn Stars - The New Yorker - 3 views

  • Last week, Marvel launched a new Avengers movie, “Age of Ultron,” and this month it’s launching a new comic book, “A-Force.” Ultron is a robot with artificial intelligence who believes that the only way to achieve peace on earth is to exterminate the human race. The A-Force is a race of lady Avengers, led by She-Hulk, who come from a “feminist paradise,” but I don’t know what that could possibly mean, because they all look like porn stars.

The A.I. "Gaydar" Study and the Real Dangers of Big Data | The New Yorker - 2 views

  • The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent.

This Cat Sensed Death. What if Computers Could, Too? - The New York Times - 0 views

  • So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?”