
TOK Friends: group items tagged “intelligence”


Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • How do you make a search engine that understands if you don’t know how you understand?
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. [A minimal sketch of this calibration loop appears after this list.]
  • Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
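
A minimal sketch of the calibration loop described in the Candide excerpts above. This is not Candide’s actual code; it is a toy word-alignment learner in the spirit of IBM Model 1, the simplest of the statistical alignment models associated with Candide. The three sentence pairs stand in for the 2.2 million Hansard pairs the article mentions, and every name and number here is illustrative.

```python
from collections import defaultdict

# Toy parallel corpus: stand-ins for the bilingual Hansard sentence pairs.
pairs = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the flower", "la fleur"),
]

# t[f][e] approximates P(French word f | English word e); start with uniform guesses.
english_vocab = {e for en, _ in pairs for e in en.split()}
t = defaultdict(lambda: {e: 1.0 / len(english_vocab) for e in english_vocab})

for _ in range(20):                      # repeat the calibration many times (EM iterations)
    count = defaultdict(float)           # expected co-occurrence counts under current guesses
    total = defaultdict(float)
    for en, fr in pairs:
        en_words, fr_words = en.split(), fr.split()
        for f in fr_words:
            norm = sum(t[f][e] for e in en_words)
            for e in en_words:
                p = t[f][e] / norm       # how strongly f is currently aligned to e
                count[(f, e)] += p
                total[e] += p
    for (f, e), c in count.items():      # re-estimate the translation table from the counts
        t[f][e] = c / total[e]

print(max(t["maison"], key=t["maison"].get))   # prints "house"
```

Run on real parallel text, the same loop is what “gradually calibrates” the machine: the translation table it learns is the statistical substitute for hand-written rules about syntax and morphology.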
Javier E

Opinion | Michael Hayden: The End of Intelligence - The New York Times - 0 views

  • To adopt post-truth thinking is to depart from Enlightenment ideas, dominant in the West since the 17th century, that value experience and expertise, the centrality of fact, humility in the face of complexity, the need for study and a respect for ideas.
  • the Trump campaign normalized lying to an unprecedented degree.
  • When pressed on specifics, the president has routinely denigrated those who questioned him, whether the “fake” media, “so called” judges, Washington insiders or the “deep state.” He has also condemned Obama-era intelligence officials as “political hacks.”
  • ...15 more annotations...
  • you could sometimes convince a liar that he was wrong. What do you do with someone who does not distinguish between truth and untruth?
  • How the erosion of Enlightenment values threatens good intelligence was obvious in the Trump administration’s ill-conceived and poorly carried out executive order that looked to the world like a Muslim ban.
  • They didn’t seem very interested in facts, either. Or at least not in my facts. Political partisanship in America has become what David Brooks calls “totalistic.” Partisan identity, as he writes, fills “the void left when their other attachments wither away — religious, ethnic, communal and familial.” Beliefs are now so tied to these identities that data is not particularly useful to argue a point.
  • Intelligence work — at least as practiced in the Western liberal tradition — reflects these threatened Enlightenment values: gathering, evaluating and analyzing information, and then disseminating conclusions for use, study or refutation.
  • we have never served a president for whom ground truth really doesn’t matter.
  • The president by all accounts is not a patient man. According to The Washington Post, one Trump confidant called him “the two-minute man” with “patience for a half page.”
  • Over time it has become clear to me that security decisions in the Trump administration follow a certain pattern. Discussion seems to start with a presidential statement or tweet. Then follows a large-scale effort to inform the president, to impress upon him the complexity of an issue, to review the relevant history, to surface more factors bearing on the problem, to raise second- and third-order consequences and to explore subsequent moves.
  • He insists on five-page or shorter intelligence briefs, rather than the 60 pages we typically gave previous presidents. There is something inherently disturbing in that. There are some problems that cannot be simplified.
  • Intelligence becomes a feeble academic exercise if it is not relevant and useful
  • History — and the next president — will judge American intelligence, and if it is found to have been too accommodating to this or any other president, it will be disastrous for the community.
  • These are truly uncharted waters for the country. We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself.
  • In this post-truth world, intelligence agencies are in the bunker with some unlikely mates: journalism, academia, the courts, law enforcement and science — all of which, like intelligence gathering, are evidence-based.
  • Intelligence shares a broader duty with these other truth-tellers to preserve the commitment and ability of our society to base important decisions on our best judgment of what constitutes objective reality.
  • The historian Timothy Snyder stresses the importance of reality and truth in his cautionary pamphlet, “On Tyranny.” “To abandon facts,” he writes, “is to abandon freedom. If nothing is true, then no one can criticize power because there is no basis upon which to do so.” He then chillingly observes, “Post-truth is pre-fascism.”
  • we traditionally rely on their truth-telling to protect us from our enemies. Now we need it to save us from ourselves.
Javier E

Software Is Smart Enough for SAT, but Still Far From Intelligent - The New York Times - 0 views

  • An artificial intelligence software program capable of seeing and reading has for the first time answered geometry questions from the SAT at the level of an average 11th grader.
  • The software had to combine machine vision to understand diagrams with the ability to read and understand complete sentences; its success represents a breakthrough in artificial intelligence.
  • Despite the advance, however, the researchers acknowledge that the program’s abilities underscore how far scientists have to go to create software capable of mimicking human intelligence.
  • ...9 more annotations...
  • designer of the test-taking program, noted that even a simple task for children, like understanding the meaning of an arrow in the context of a test diagram, was not yet something the most advanced A.I. programs could do reliably.
  • scientific workshops intended to develop more accurate methods than the Turing test for measuring the capabilities of artificial intelligence programs.
  • Researchers in the field are now developing a wide range of gauges to measure intelligence — including the Allen Institute’s standardized-test approach and a task that Dr. Marcus proposed, which he called the “Ikea construction challenge.” That test would provide an A.I. program with a bag of parts and an instruction sheet and require it to assemble a piece of furniture.
  • First proposed in 2011 by Hector Levesque, a University of Toronto computer scientist, the Winograd Schema Challenge would pose questions that require real-world logic to A.I. programs. A question might be: “The trophy would not fit in the brown suitcase because it was too big. What was too big, A: the trophy or B: the suitcase?” Answering this question would require a program to reason spatially and have specific knowledge about the size of objects.
  • Within the A.I. community, discussions about software programs that can reason in a humanlike way are significant because recent progress in the field has been more focused on improving perception, not reasoning.
  • GeoSolver, or GeoS, was described at the Conference on Empirical Methods in Natural Language Processing in Lisbon this weekend. It operates by separately generating a series of logical equations, which serve as components of possible answers, from the text and the diagram in the question. It then weighs the accuracy of the equations and tries to discern whether its interpretation of the diagram and text is strong enough to select one of the multiple-choice answers.
  • Ultimately, Dr. Marcus said, he believed that progress in artificial intelligence would require multiple tests, just as multiple tests are used to assess human performance.
  • “There is no one measure of human intelligence,” he said. “Why should there be just one A.I. test?”
  • In the 1960s, Hubert Dreyfus, a philosophy professor at the University of California, Berkeley, expressed this skepticism most clearly when he wrote, “Believing that writing these types of programs will bring us closer to real artificial intelligence is like believing that someone climbing a tree is making progress toward reaching the moon.”
Javier E

Watson Still Can't Think - NYTimes.com - 0 views

  • Fish argued that Watson “does not come within a million miles of replicating the achievements of everyday human action and thought.” In defending this claim, Fish invoked arguments that one of us (Dreyfus) articulated almost 40 years ago in “What Computers Can’t Do,” a criticism of 1960s and 1970s style artificial intelligence.
  • At the dawn of the AI era the dominant approach to creating intelligent systems was based on finding the right rules for the computer to follow.
  • GOFAI, for Good Old Fashioned Artificial Intelligence.
  • ...12 more annotations...
  • For constrained domains the GOFAI approach is a winning strategy.
  • there is nothing intelligent or even interesting about the brute force approach.
  • the dominant paradigm in AI research has largely “moved on from GOFAI to embodied, distributed intelligence.” And Faustus from Cincinnati insists that as a result “machines with bodies that experience the world and act on it” will be “able to achieve intelligence.”
  • The new, embodied paradigm in AI, deriving primarily from the work of roboticist Rodney Brooks, insists that the body is required for intelligence. Indeed, Brooks’s classic 1990 paper, “Elephants Don’t Play Chess,” rejected the very symbolic computation paradigm against which Dreyfus had railed, favoring instead a range of biologically inspired robots that could solve apparently simple, but actually quite complicated, problems like locomotion, grasping, navigation through physical environments and so on. To solve these problems, Brooks discovered that it was actually a disadvantage for the system to represent the status of the environment and respond to it on the basis of pre-programmed rules about what to do, as the traditional GOFAI systems had. Instead, Brooks insisted, “It is better to use the world as its own model.”
  • although they respond to the physical world rather well, they tend to be oblivious to the global, social moods in which we find ourselves embedded essentially from birth, and in virtue of which things matter to us in the first place.
  • the embodied AI paradigm is irrelevant to Watson. After all, Watson has no useful bodily interaction with the world at all.
  • The statistical machine learning strategies that it uses are indeed a big advance over traditional GOFAI techniques. But they still fall far short of what human beings do.
  • “The illusion is that this computer is doing the same thing that a very good ‘Jeopardy!’ player would do. It’s not. It’s doing something sort of different that looks the same on the surface. And every so often you see the cracks.”
  • Watson doesn’t understand relevance at all. It only measures statistical frequencies. Because it is relatively common to find mismatches of this sort, Watson learns to weigh them as only mild evidence against the answer. But the human just doesn’t do it that way. The human being sees immediately that the mismatch is irrelevant for the Erie Canal but essential for Toronto. Past frequency is simply no guide to relevance.
  • The fact is, things are relevant for human beings because at root we are beings for whom things matter. Relevance and mattering are two sides of the same coin. As Haugeland said, “The problem with computers is that they just don’t give a damn.” It is easy to pretend that computers can care about something if we focus on relatively narrow domains — like trivia games or chess — where by definition winning the game is the only thing that could matter, and the computer is programmed to win. But precisely because the criteria for success are so narrowly defined in these cases, they have nothing to do with what human beings are when they are at their best.
  • Far from being the paradigm of intelligence, therefore, mere matching with no sense of mattering or relevance is barely any kind of intelligence at all. As beings for whom the world already matters, our central human ability is to be able to see what matters when.
  • But, as we show in our recent book, this is an existential achievement orders of magnitude more amazing and wonderful than any statistical treatment of bare facts could ever be. The greatest danger of Watson’s victory is not that it proves machines could be better versions of us, but that it tempts us to misunderstand ourselves as poorer versions of them.
lenaurick

IQ can predict your risk of death, and 8 other smart facts about intelligence - Vox - 0 views

  • But according to Stuart Ritchie, an intelligence researcher at the University of Edinburgh, there's a massive amount of data showing that it's one of the best predictors of someone's longevity, health, and prosperity
  • In a new book, Intelligence: All that Matters, Ritchie persuasively argues that IQ doesn't necessarily set the limit for what we can do, but it does give us a starting point
  • Most people you meet are probably average, and a few are extraordinarily smart. Just 2.2 percent have an IQ of 130 or greater.
  • ...17 more annotations...
  • "The classic finding — I would say it is the most replicated finding in psychology — is that people who are good at one type of mental task tend to be good at them all,"
  • G-factor is real in the sense it can predict outcomes in our lives — how much money you'll make, how productive of a worker you might be, and, most chillingly, how likely you are to die an earlier death.
  • According to the research, people with high IQs tend to be healthier and live longer than the rest of us
  • One is the fact that people with higher IQs tend to make more money than people with lower scores. Money is helpful in maintaining weight, nutrition, and accessing good health care.
  • IQ often beats personality when it comes to predicting life outcomes: Personality traits, a recent study found, can explain about 4 percent of the variance in test scores for students under age 16. IQ can explain 25 percent, or an even higher proportion, depending on the study.
  • Many of these correlations are less than .5, which means there's plenty of room for individual differences. So, yes, very smart people who are awful at their jobs exist. You're just less likely to come across them. [A short note relating correlations like these to “variance explained” appears after this list.]
  • “The correlation between IQ and happiness is usually positive, but also usually smaller than one might expect (and sometimes not statistically significant),” Ritchie says.
  • It could also be that people with higher IQs are smart enough to avoid accidents and mishaps. There's actually some evidence to support this: Higher-IQ people are less likely to die in traffic accidents.
  • Even though intelligence generally declines with age, those who had high IQs as children were most likely to retain their smarts as very old people.
  • "If we know the genes related to intelligence — and we know these genes are related to cognitive decline as well — then we can start to a predict who is going to have the worst cognitive decline, and devote health care medical resources to them," he says.
  • Studies comparing identical and fraternal twins find that about half of the variation in IQ can be explained by genetics. [The arithmetic behind twin estimates like this is sketched after this list.]
  • genetics seems to become more predictive of IQ with age.
  • The idea is as we age, we grow more in control of our environments. Those environments we create can then "amplify" the potential of our genes.
  • About half the variability in IQ is attributed to the environment. Access to nutrition, education, and health care appear to play a big role.
  • “People’s lives are really messy, and the environments they are in are messy. There’s a possibility that a lot of the environmental effect on a person’s intelligence is random.”
  • Hurray! Mean IQ scores appear to be increasing between 2 and 3 points per decade.
  • This phenomenon is known as the Flynn effect, and it is likely the result of increasing quality of childhood nutrition, health care, and education.
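
A back-of-the-envelope note on the correlation figures quoted above; this arithmetic is standard statistics rather than anything stated in the Vox piece. The share of variance a predictor accounts for is the square of its correlation with the outcome, which is how a correlation of about 0.5 lines up with the “25 percent of the variance” figure:

```latex
r = 0.5 \;\Rightarrow\; r^{2} = 0.25 \quad \text{(about 25\% of the variance explained)}
```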
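
Similarly, the twin estimate of heritability can be made concrete with Falconer's classic approximation, which compares the IQ correlations of identical (MZ) and fraternal (DZ) twins. The formula is standard, but the two correlations plugged in below are illustrative round numbers, not figures from the article:

```latex
H^{2} \approx 2\,(r_{MZ} - r_{DZ}), \qquad \text{e.g. } 2\,(0.85 - 0.60) = 0.50
```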
sissij

How Intelligent Are Psychopaths? Study Examines the "Hannibal Lecter Myth" | Big Think - 0 views

  • We tend to think of psychopaths as dangerous, antisocial, lacking in key human emotions like empathy or remorse.
  • finding that whatever qualities they might have, high intelligence is not one of them. In fact, psychopaths were found to be less intelligent than average people.
  • referred to by psychologists as the “Hannibal Lecter myth”. But that kind of Hollywoodized psychopath did not sit well with observed facts.
  • ...3 more annotations...
  • The researchers found that psychopaths scored lower on intelligence tests. A surprising result, according to Boutwell. 
  • The researchers hope that their finding will contribute to our understanding of psychopathy, currently an untreatable condition.
  • Further research might also change how psychopaths are treated by the criminal justice system. 
  •  
    We tend to give labels and stereotypes to groups of people, and psychopaths are just another group that has been incorrectly depicted. Social media plays a big part in this. Hollywood portrays psychopaths as highly intelligent evil geniuses to make its characters and storylines more interesting. That image is far from the actual facts, but viewers and the general public take it as reality. Since people tend to be lazy, they are eager to receive information rather than find it themselves. Through the exposure effect, the image of the psychopath as evil but highly intelligent has become the mainstream impression. Correcting this image in our heads can help us better understand psychopaths' behavior without making unfair judgments. --Sissi (2/9/2017)
Javier E

New Statesman - All machine and no ghost? - 0 views

  • More subtly, there are many who insist that consciousness just reduces to brain states - a pang of regret, say, is just a surge of chemicals across a synapse. They are collapsers rather than deniers. Though not avowedly eliminative, this kind of view is tacitly a rejection of the very existence of consciousness
  • The dualist, by contrast, freely admits that consciousness exists, as well as matter, holding that reality falls into two giant spheres. There is the physical brain, on the one hand, and the conscious mind, on the other: the twain may meet at some point but they remain distinct entities.
  • Dualism makes the mind too separate, thereby precluding intelligible interaction and dependence.
  • ...11 more annotations...
  • At this point the idealist swooshes in: ladies and gentlemen, there is nothing but mind! There is no problem of interaction with matter because matter is mere illusion
  • idealism has its charms but taking it seriously requires an antipathy to matter bordering on the maniacal. Are we to suppose that material reality is just a dream, a baseless fantasy, and that the Big Bang was nothing but the cosmic spirit having a mental sneezing fit?
  • panpsychism: even the lowliest of material things has a streak of sentience running through it, like veins in marble. Not just parcels of organic matter, such as lizards and worms, but also plants and bacteria and water molecules and even electrons. Everything has its primitive feelings and minute allotment of sensation.
  • The trouble with panpsychism is that there just isn't any evidence of the universal distribution of consciousness in the material world.
  • it occurred to me that the problem might lie not in nature but in ourselves: we just don't have the faculties of comprehension that would enable us to remove the sense of mystery. Ontologically, matter and consciousness are woven intelligibly together but epistemologically we are precluded from seeing how. I used Noam Chomsky's notion of "mysteries of nature" to describe the situation as I saw it. Soon, I was being labelled (by Owen Flanagan) a "mysterian"
  • The more we know of the brain, the less it looks like a device for creating consciousness: it's just a big collection of biological cells and a blur of electrical activity - all machine and no ghost.
  • mystery is quite pervasive, even in the hardest of sciences. Physics is a hotbed of mystery: space, time, matter and motion - none of it is free of mysterious elements. The puzzles of quantum theory are just a symptom of this widespread lack of understanding
  • The human intellect grasps the natural world obliquely and glancingly, using mathematics to construct abstract representations of concrete phenomena, but what the ultimate nature of things really is remains obscure and hidden. How everything fits together is particularly elusive, perhaps reflecting the disparate cognitive faculties we bring to bear on the world (the senses, introspection, mathematical description). We are far from obtaining a unified theory of all being and there is no guarantee that such a theory is accessible by finite human intelligence.
  • real naturalism begins with a proper perspective on our specifically human intelligence. Palaeoanthropologists have taught us that the human brain gradually evolved from ancestral brains, particularly in concert with practical toolmaking, centring on the anatomy of the human hand. This history shaped and constrained the form of intelligence now housed in our skulls (as the lifestyle of other species form their set of cognitive skills). What chance is there that an intelligence geared to making stone tools and grounded in the contingent peculiarities of the human hand can aspire to uncover all the mysteries of the universe? Can omniscience spring from an opposable thumb? It seems unlikely, so why presume that the mysteries of consciousness will be revealed to a thumb-shaped brain like ours?
  • The "mysterianism" I advocate is really nothing more than the acknowledgment that human intelligence is a local, contingent, temporal, practical and expendable feature of life on earth - an incremental adaptation based on earlier forms of intelligence that no one would reg
  • rd as faintly omniscient. The current state of the philosophy of mind, from my point of view, is just a reflection of one evolutionary time-slice of a particular bipedal species on a particular humid planet at this fleeting moment in cosmic history - as is everything else about the human animal. There is more ignorance in it than knowledge.
dpittenger

Elon Musk, Stephen Hawking warn of artificial intelligence dangers - 0 views

  • Call it preemptive extinction panic, smart people buying into Sci-Fi hype or simply a prudent stance on a possible future issue, but the fear around artificial intelligence is increasingly gaining traction among those with credentials to back up the distress.
  • However, history doesn't always neatly fit into our forecasts. If things continue as they have with brain-to-machine interfaces becoming ever more common, we're just as likely to have to confront the issue of enhanced humans (digitally, mechanically and/or chemically) long before AI comes close to sentience.
  • Still, whether or not you believe computers will one day be powerful enough to go off and find their own paths, which may conflict with humanity's, the very fact that so many intelligent people feel the issue is worth a public stance should be enough to grab your attention.
  •  
    Stephen Hawking and Elon Musk fear that artificial intelligence could become dangerous. We talked about this a bit in class before, but it is starting to become a new fear. Artificial intelligence could possibly become smarter than us, and that wouldn't be good.
Javier E

Why the very concept of 'general knowledge' is under attack | Times2 | The Times - 0 views

  • why has University Challenge lasted, virtually unchanged, for so long?
  • The answer may lie in a famous theory about our brains put forward by the psychologist Raymond Cattell in 1963
  • Cattell divided intelligence into two categories: fluid and crystallised. Fluid intelligence refers to basic reasoning and other mental activities that require minimal learning — just an alert and flexible brain.
  • ...12 more annotations...
  • By contrast, crystallised intelligence is based on experience and the accumulation of knowledge. Fluid intelligence peaks at the age of about 20 then gradually declines, whereas crystallised intelligence grows through your life until you hit your mid-sixties, when you start forgetting things.
  • that explains much about University Challenge’s appeal. Because the contestants are mostly aged around 20 and very clever, their fluid intelligence is off the scale
  • On the other hand, because they have had only 20 years to acquire crystallised intelligence, their store of general knowledge is likely to be lacking in some areas.
  • In each episode there will be questions that older viewers can answer, thanks to their greater store of crystallised intelligence, but the students cannot. Therefore we viewers don’t feel inferior when confronted by these smart young people. On the contrary: we feel, in some areas, slightly superior.
  • It’s a brilliantly balanced format
  • there is a real threat to the future of University Challenge and much else of value in our society, and it is this. The very concept of “general knowledge” — of a widely accepted core of information that educated, inquisitive people should have in their memory banks — is under attack from two different groups.
  • The first comprises the deconstructionists and decolonialists
  • They argue that all knowledge is contextual and that things taken for granted in the past — for instance, a canon of great authors that everyone should read at school — merely reflect an outdated, usually Eurocentric view of what’s intellectually important.
  • The other group is the technocrats who argue that the extent of human knowledge is now so vast that it’s impossible for any individual to know more than, perhaps, one billionth of it
  • So why not leave it entirely to computers to do the heavy lifting of knowledge storing and recall, thus freeing our minds for creativity and problem solving?
  • The problem with the agitators on both sides of today’s culture wars is that they are forcefully trying to shape what’s accepted as general knowledge according to a blatant political agenda.
  • And the problem with relying on, say, Wikipedia’s 6.5 million English-language articles to store general knowledge for all of us? It’s the tacit implication that “mere facts” are too tedious to be clogging up our brains. From there it’s a short step to saying that facts don’t matter at all, that everything should be decided by “feelings”. And from there it’s an even shorter step to fake news and pernicious conspiracy theories, the belittling of experts and hard evidence, the closing of minds, the thickening of prejudice and the trivialisation of the national conversation.
Javier E

What Happened Before the Big Bang? The New Philosophy of Cosmology - Ross Andersen - Te... - 1 views

  • This question of accounting for what we call the "big bang state" -- the search for a physical explanation of it -- is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it.
  • One that's becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about
  • There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.
  • ...9 more annotations...
  • One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger
  • Newton realized there had to be some force holding the moon in its orbit around the earth, to keep it from wandering off, and he knew also there was a force that was pulling the apple down to the earth. And so what suddenly struck him was that those could be one and the same thing, the same force
  • That was a physical discovery, a physical discovery of momentous importance, as important as anything you could ever imagine because it knit together the terrestrial realm and the celestial realm into one common physical picture. It was also a philosophical discovery in the sense that philosophy is interested in the fundamental natures of things.
  • The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that's very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrodinger was upset about. Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn't do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we're coming out of that
  • The basic philosophical question, going back to Plato, is "What is x?" What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it's a perfectly good question.
  • right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn't be that many, that many of them are related to one another. Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had --using his metaphor-- how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number
  • You have others saying that time is just an illusion, that there isn't really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don't change -- which is perfectly true -- and then you're always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn't change, because your representations of it don't.
  • physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say "I don't have the tools to answer a question like 'what is time?' - I have the tools to solve a differential equation." The asking of fundamental physical questions is just not part of the training of a physicist anymore.
  • The question remains as to how often, after life evolves, you'll have intelligent life capable of making technology. What people haven't seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It's not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that's not true. Obviously it doesn't matter that much if you're a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there's a high probability that evolution on another planet would lead to technological intelligence.
douglasn89

The Simple Economics of Machine Intelligence - 0 views

  • The year 1995 was heralded as the beginning of the “New Economy.” Digital communication was set to upend markets and change everything. But economists by and large didn’t buy into the hype.
  • Today we are seeing similar hype about machine intelligence. But once again, as economists, we believe some simple rules apply. Technological revolutions tend to involve some important activity becoming cheap, like the cost of communication or finding information. Machine intelligence is, in its essence, a prediction technology, so the economic shift will center around a drop in the cost of prediction.
  • The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy manufacturing, and retail.
    • douglasn89
       
      This emphasis on prediction ties into our previous discussion and reading, which included the idea that humans are by nature poor predictors; because of that, they have begun to design machines to predict for them.
  • ...4 more annotations...
  • As machine intelligence lowers the cost of prediction, we will begin to use it as an input for things for which we never previously did. As a historical example, consider semiconductors, an area of technological advance that caused a significant drop in the cost of a different input: arithmetic. With semiconductors we could calculate cheaply, so activities for which arithmetic was a key input, such as data analysis and accounting, became much cheaper.
  • As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic.
  • Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.
  • But it yields two key implications: 1) an expanded role of prediction as an input to more goods and services, and 2) a change in the value of other inputs, driven by the extent to which they are complements to or substitutes for prediction. These changes are coming.
    • douglasn89
       
      This article agrees with the readings from Unit 5 Lesson 6 in its prediction of changes.
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work. [A toy example of such a system appears after this list.]
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
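
A toy illustration of the “mathematical system that learns skills by analyzing data” mentioned in the excerpt about Dr. Hinton’s early work. It is a single logistic neuron trained by gradient descent to separate two clusters of points; the data, learning rate and iteration count are made-up illustrative values, and nothing here approaches the scale of the networks discussed in the article.

```python
import math
import random

random.seed(0)
# Two clusters of 2-D points, labelled 0 and 1 (purely synthetic data).
data = [((random.gauss(1, 0.3), random.gauss(1, 0.3)), 0) for _ in range(50)] + \
       [((random.gauss(3, 0.3), random.gauss(3, 0.3)), 1) for _ in range(50)]

w1 = w2 = b = 0.0          # the network's adjustable parameters, initially zero
lr = 0.1                   # learning rate: how big each corrective nudge is

def predict(x1, x2):
    """Squash a weighted sum of the inputs into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(200):                      # repeatedly analyze the data
    for (x1, x2), label in data:
        err = predict(x1, x2) - label     # how far the current guess is from the label
        w1 -= lr * err * x1               # nudge each parameter to shrink that error
        w2 -= lr * err * x2
        b  -= lr * err

correct = sum((predict(x1, x2) > 0.5) == bool(label) for (x1, x2), label in data)
print(f"{correct}/{len(data)} training points classified correctly")
```

The “learning” is nothing more than repeatedly comparing a prediction with a known label and nudging the parameters; broadly the same idea, scaled up to millions of parameters and labelled images instead of points, underlies the 2012 image-recognition network described above.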
oliviaodon

Why Are Some People So Smart? The Answer Could Spawn a Generation of Superbabies | WIRED - 0 views

  • use those machines to examine the genetic underpinnings of genius like his own. He wants nothing less than to crack the code for intelligence by studying the genomes of thousands of prodigies, not just from China but around the world.
  • fully expect they will succeed in identifying a genetic basis for IQ. They also expect that within a decade their research will be used to screen embryos during in vitro fertilization, boosting the IQ of unborn children by up to 20 points. In theory, that’s the difference between a kid who struggles through high school and one who sails into college.
  • studies make it clear that IQ is strongly correlated with the ability to solve all sorts of abstract problems, whether they involve language, math, or visual patterns. The frightening upshot is that IQ remains by far the most powerful predictor of the life outcomes that people care most about in the modern world. Tell me your IQ and I can make a decently accurate prediction of your occupational attainment, how many kids you’ll have, your chances of being arrested for a crime, even how long you’ll live.
  • ...6 more annotations...
  • Dozens of popular books by nonexperts have filled the void, many claiming that IQ—which after more than a century remains the dominant metric for intelligence—predicts nothing important or that intelligence is simply too complex and subtle to be measured.
  • evidence points toward a strong genetic component in IQ. Based on studies of twins, siblings, and adoption, contemporary estimates put the heritability of IQ at 50 to 80 percent
  • intelligence has a genetic recipe
  • “Do you know any Perl?” Li asked him. Perl is a programming language often used to analyze genomic data. Zhao admitted he did not; in fact, he had no programming skills at all. Li handed him a massive textbook, Programming Perl. There were only two weeks left in the camp, so this would get rid of the kid for good. A few days later, Zhao returned. “I finished it,” he said. “The problems are kind of boring. Do you have anything harder?” Perl is a famously complicated language that takes university students a full year to learn.
  • So Li gave him a large DNA data set and a complicated statistical problem. That should do it. But Zhao returned later that day. “Finished.” Not only was it finished—and correct—but Zhao had even built a slick interface on top of the data.
  • driven by a fascination with kids who are born smart; he wants to know what makes them—and by extension, himself—the way they are.
  •  
    This is a really interesting article about using science to improve intelligence.
Javier E

Bill Gates on dangers of artificial intelligence: 'I don't understand why some people a... - 0 views

  • "I am in the camp that is concerned about super intelligence," Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
Javier E

I.Q. Points for Sale, Cheap - NYTimes.com - 1 views

  • Until recently, the overwhelming consensus in psychology was that intelligence was essentially a fixed trait. But in 2008, an article by a group of researchers led by Susanne Jaeggi and Martin Buschkuehl challenged this view and renewed many psychologists’ enthusiasm about the possibility that intelligence was trainable — with precisely the kind of tasks that are now popular as games.
  • it’s important to explain why we’re not sold on the idea.
  • There have been many attempts to demonstrate large, lasting gains in intelligence through educational interventions, with few successes. When gains in intelligence have been achieved, they have been modest and the result of many years of effort.
  • ...3 more annotations...
  • Web site PsychFileDrawer.org, which was founded as an archive for failed replication attempts in psychological research, maintains a Top 20 list of studies that its users would like to see replicated. The Jaeggi study is currently No. 1.
  • Another reason for skepticism is a weakness in the Jaeggi study’s design: it included only a single test of reasoning to measure gains in intelligence.
  • Demonstrating that subjects are better on one reasoning test after cognitive training doesn’t establish that they’re smarter. It merely establishes that they’re better on one reasoning test.
charlottedonoho

How Technology Can Help Language Learning | Suren Ramasubbu - 0 views

  • Intelligence, according to Gardner, is of eight types - verbal-linguistic, logical-mathematical, musical-rhythmic, visual-spatial, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic; existential and moral intelligence were added as afterthoughts in the definition of Intelligence. This is the first in a series of posts that explore and understand how each of the above forms of intelligence is affected by technology-mediated education.
  • Verbal-linguistic Intelligence involves sensitivity to spoken and written language, the ability to learn languages, and the capacity to use language to accomplish goals. Such intelligence is fostered by three specific activities: reading, writing and interpersonal communication - both written and oral.
  • Technology allows addition of multisensory elements that provide meaningful contexts to facilitate comprehension, thus expanding the learning ground of language and linguistics.
  • ...8 more annotations...
  • Research into the effect of technology on the development of the language and literacy skills vis-à-vis reading activities of children has offered evidence for favorable effects of digital-form books.
  • E-books are also being increasingly used to teach reading among beginners and children with reading difficulties.
  • Technology can be used to improve reading ability in many ways. It can enhance and sustain the interest levels of digital natives by allowing immediate feedback on performance and providing added practice when necessary.
  • Technology can also help in improvement of writing skills. Word processing software promotes not only composition but also editing and revising in ways that streamline the task of writing.
  • However, the web cannot be discounted as being "bad for language", considering that it also offers very useful tools such as blogging and microblogging that can help the student improve her writing skills with dynamic feedback. The possibility of incorporating other media into a written document (e.g. figures, graphics, videos etc.) can enhance the joy of writing using technology.
  • Technology enhanced oral communication is indeed useful in that it allows students from remote locations, or from all over the world to communicate orally through video and audio conferencing tools.
  • As with anything to do with technology, there are also detractors who point to the potentially negative influence of features like animation, sound, music and other multimedia effects available in digital media, which may distract young readers from the story content.
  • Such complaints notwithstanding, the symbiotic ties between linguistics and technology cannot be ignored.
kushnerha

BBC - Future - The surprising downsides of being clever - 0 views

  • If ignorance is bliss, does a high IQ equal misery? Popular opinion would have it so. We tend to think of geniuses as being plagued by existential angst, frustration, and loneliness. Think of Virginia Woolf, Alan Turing, or Lisa Simpson – lone stars, isolated even as they burn their brightest. As Ernest Hemingway wrote: “Happiness in intelligent people is the rarest thing I know.”
  • Combing California’s schools for the crème de la crème, he selected 1,500 pupils with an IQ of 140 or more – 80 of whom had IQs above 170. Together, they became known as the “Termites”, and the highs and lows of their lives are still being studied to this day.
  • Termites’ average salary was twice that of the average white-collar job. But not all the group met Terman’s expectations – there were many who pursued more “humble” professions such as police officers, seafarers, and typists. For this reason, Terman concluded that “intellect and achievement are far from perfectly correlated”. Nor did their smarts endow personal happiness. Over the course of their lives, levels of divorce, alcoholism and suicide were about the same as the national average.
  • ...16 more annotations...
  • One possibility is that knowledge of your talents becomes something of a ball and chain. Indeed, during the 1990s, the surviving Termites were asked to look back at the events in their 80-year lifespan. Rather than basking in their successes, many reported that they had been plagued by the sense that they had somehow failed to live up to their youthful expectations.
  • The most notable, and sad, case concerns the maths prodigy Sufiah Yusof. Enrolled at Oxford University aged 12, she dropped out of her course before taking her finals and started waitressing. She later worked as a call girl, entertaining clients with her ability to recite equations during sexual acts.
  • Another common complaint, often heard in student bars and internet forums, is that smarter people somehow have a clearer vision of the world’s failings. Whereas the rest of us are blinkered from existential angst, smarter people lay awake agonising over the human condition or other people’s folly.
  • MacEwan University in Canada found that those with the higher IQ did indeed feel more anxiety throughout the day. Interestingly, most worries were mundane, day-to-day concerns, though; the high-IQ students were far more likely to be replaying an awkward conversation, than asking the “big questions”. “It’s not that their worries were more profound, but they are just worrying more often about more things,” says Penney. “If something negative happened, they thought about it more.”
  • seemed to correlate with verbal intelligence – the kind tested by word games in IQ tests, compared to prowess at spatial puzzles (which, in fact, seemed to reduce the risk of anxiety). He speculates that greater eloquence might also make you more likely to verbalise anxieties and ruminate over them. It’s not necessarily a disadvantage, though. “Maybe they were problem-solving a bit more than most people,” he says – which might help them to learn from their mistakes.
  • The harsh truth, however, is that greater intelligence does not equate to wiser decisions; in fact, in some cases it might make your choices a little more foolish.
  • we need to turn our minds to an age-old concept: “wisdom”. His approach is more scientific than it might at first sound. “The concept of wisdom has an ethereal quality to it,” he admits. “But if you look at the lay definition of wisdom, many people would agree it’s the idea of someone who can make good unbiased judgement.”
  • “my-side bias” – our tendency to be highly selective in the information we collect so that it reinforces our previous attitudes. The more enlightened approach would be to leave your assumptions at the door as you build your argument – but Stanovich found that smarter people are almost no more likely to do so than people with distinctly average IQs.
  • People who ace standard cognitive tests are in fact slightly more likely to have a “bias blind spot”. That is, they are less able to see their own flaws, even though they are quite capable of criticising the foibles of others. And they have a greater tendency to fall for the “gambler’s fallacy”
  • A tendency to rely on gut instincts rather than rational thought might also explain why a surprisingly high number of Mensa members believe in the paranormal; or why someone with an IQ of 140 is about twice as likely to max out their credit card.
  • “The people pushing the anti-vaccination meme on parents and spreading misinformation on websites are generally of more than average intelligence and education.” Clearly, clever people can be dangerously, and foolishly, misguided.
  • spent the last decade building tests for rationality, and he has found that fair, unbiased decision-making is largely independent of IQ.
  • Crucially, Grossmann found that IQ was not related to any of these measures, and certainly didn’t predict greater wisdom. “People who are very sharp may generate, very quickly, arguments [for] why their claims are the correct ones – but may do it in a very biased fashion.”
  • employers may well begin to start testing these abilities in place of IQ; Google has already announced that it plans to screen candidates for qualities like intellectual humility, rather than sheer cognitive prowess.
  • He points out that we often find it easier to leave our biases behind when we consider other people, rather than ourselves. Along these lines, he has found that simply talking through your problems in the third person (“he” or “she”, rather than “I”) helps create the necessary emotional distance, reducing your prejudices and leading to wiser arguments.
  • If you’ve been able to rest on the laurels of your intelligence all your life, it could be very hard to accept that it has been blinding your judgement. As Socrates had it: the wisest person really may be the one who can admit he knows nothing.
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • People choose computers as intermediaries for the sensual delight of using computers
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • Doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • The present status of intelligent machines is more powerful than any future robot apocalypse.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment. Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
runlai_jiang

An Introduction to Dog Intelligence and Emotion - 0 views

  • The Science of Animal Cognition: Over the past several years, one of the biggest advances in our human understanding of doggie cognition has been the use of MRI machines to scan dog brains. MRI stands for magnetic resonance imaging, the process of taking an ongoing picture of what parts of the brain are lighting up through what external stimuli. Dogs, as any doggie parent knows, are highly trainable. This trainable nature makes dogs great candidates for MRI machines, unlike non-domesticated wild animals like birds or bears.
  • Do you imagine they feel something like human jealousy? Well, there’s science to back this up, too.
  • As Smart as Children: Animal psychologists have clocked dog intelligence at right around that of a two to two-and-a-half year old human child. The 2009 study which examined this found that dogs can understand up to 250 words and gestures. Even more surprising, the same study found that dogs can actually count low numbers (up to five) and even do simple math.
  • ...3 more annotations...
  • Through ongoing research, McGowan has found out a lot about animal cognition and feelings. In a study done in 2015, McGowan found that a human’s presence leads to increased blood flow to a dog’s eyes, ears and paws, which means the dog is excited.
  • Dogs have been studied for their empathy, as well. A 2012 study examined dogs’ behavior towards distressed humans that weren’t their owners. While the study concluded that dogs display an empathy-like behavior, the scientists writing the re
  • Numerous other studies on dog behavior, emotion, and intelligence have found that dogs “eavesdrop” on human interactions to assess who is mean to their owner and who isn’t and that dogs follow their human’s gaze. These studies may just be the tip of the iceberg when it comes to our learning about dogs. And as for doggie parents? Well, they may know a lot more than the rest of us, just by observing their best canine companions every day.
Javier E

Opinion | Noam Chomsky: The False Promise of ChatGPT - The New York Times - 0 views

  • we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
  • OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought
  • if machine learning programs like ChatGPT continue to dominate the field of A.I
  • ...22 more annotations...
  • , we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
  • It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
  • The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question
  • the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations
  • such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case
  • Those are the ingredients of explanation, the mark of true intelligence.
  • Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.”
  • an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
  • The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws
  • any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.
  • ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
  • Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
  • For this reason, the predictions of machine learning systems will always be superficial and dubious.
  • some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience.
  • While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
  • The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
  • This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)
  • True intelligence is also capable of moral thinking
  • To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content
  • In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
  • Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
  • In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.