
New Media Ethics 2009 course: Group items tagged "computer"


Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Digital Domain - Computers at Home - Educational Hope vs. Teenage Reality - NYTimes.com - 0 views

  • MIDDLE SCHOOL students are champion time-wasters. And the personal computer may be the ultimate time-wasting appliance.
  • there is an automatic inclination to think of the machine in its most idealized form, as the Great Equalizer. In developing countries, computers are outfitted with grand educational hopes, like those that animate the One Laptop Per Child initiative, which was examined in this space in April.
  • Economists are trying to measure a home computer’s educational impact on schoolchildren in low-income households. Taking widely varying routes, they are arriving at similar conclusions: little or no educational benefit is found. Worse, computers seem to have further separated children in low-income households, whose test scores often decline after the machine arrives, from their more privileged counterparts.
  • Professor Malamud and his collaborator, Cristian Pop-Eleches, an assistant professor of economics at Columbia University, did their field work in Romania in 2009, where the government invited low-income families to apply for vouchers worth 200 euros (then about $300) that could be used for buying a home computer. The program provided a control group: the families who applied but did not receive a voucher.
  • the professors report finding “strong evidence that children in households who won a voucher received significantly lower school grades in math, English and Romanian.” The principal positive effect on the students was improved computer skills.
  • few children whose families obtained computers said they used the machines for homework. What they were used for — daily — was playing games.
  • negative effect on test scores was not universal, but was largely confined to lower-income households, in which, the authors hypothesized, parental supervision might be spottier, giving students greater opportunity to use the computer for entertainment unrelated to homework and reducing the amount of time spent studying.
  • The North Carolina study suggests the disconcerting possibility that home computers and Internet access have such a negative effect only on some groups and end up widening achievement gaps between socioeconomic groups. The expansion of broadband service was associated with a pronounced drop in test scores for black students in both reading and math, but no effect on the math scores and little on the reading scores of other students.
  •  
    Computers at Home: Educational Hope vs. Teenage Reality By RANDALL STROSS Published: July 9, 2010
Weiye Loh

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but after all, western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games (see the short comparison after these annotations).
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
  •  
    Computer beats human at Japanese chess for first time
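A quick back-of-the-envelope comparison of the two game-tree figures quoted above (the 10^123 and 10^224 estimates are the article's; the snippet only takes their ratio):

    # Rough comparison of the game-tree sizes quoted in the annotation above.
    chess_exp = 123   # western chess: roughly 10^123 possible games
    shogi_exp = 224   # shogi: roughly 10^224 possible games

    gap = shogi_exp - chess_exp
    print(f"Shogi's game tree is about 10^{gap} times larger than chess's.")
    # -> Shogi's game tree is about 10^101 times larger than chess's.

So "a bit more complex" understates the difference by about a hundred orders of magnitude.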
Weiye Loh

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left.
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble with translating commands for making a fist, let alone probing anyone's mental secrets
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.
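The annotations above describe the decoding step only in outline: intended movements produce measurably different patterns of neural firing, and software maps those patterns onto cursor motion. The sketch below is not BrainGate's actual decoder, just a minimal illustration of the general idea using a linear map fitted by least squares; the channel count, calibration data, and variable names are all invented.

    import numpy as np

    # Toy stand-in for decoding intended cursor velocity from neural firing rates.
    # X: firing rates on 96 electrode channels over 1,000 time steps (simulated here).
    # Y: the intended 2-D cursor velocity at each step, known during calibration
    #    (e.g. while the patient imagines following a moving target).
    rng = np.random.default_rng(0)
    true_map = rng.normal(size=(96, 2))              # hidden "tuning" of each channel
    X = rng.poisson(lam=5.0, size=(1000, 96)).astype(float)
    Y = X @ true_map + rng.normal(scale=0.5, size=(1000, 2))

    # Calibration: fit a linear decoder W so that X @ W approximates Y.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Online use: a fresh vector of firing rates is turned into a cursor velocity.
    new_rates = rng.poisson(lam=5.0, size=(1, 96)).astype(float)
    vx, vy = (new_rates @ W)[0]
    print(f"decoded cursor velocity: ({vx:+.2f}, {vy:+.2f})")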
kenneth yang

CYBER TROOPERS MAKE ARREST FOR SEXUAL SOLICITATION OF A MINOR - 8 views

BALTIMORE, Aug. 12 -- The Maryland State Police issued the following news release: A man who had been making online plans to allegedly have sex with someone he thought was a 13-year old girl, had h...

started by kenneth yang on 18 Aug 09 no follow-up yet
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technolog
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
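Several of the annotations above lend themselves to small worked illustrations. First, Shannon's treatment of information as an abstract quantity: the information content of a source is its entropy, H = -Σ p_i log2(p_i) bits per symbol. A minimal sketch (the example distributions are made up):

    import math

    def entropy_bits(probabilities):
        """Shannon entropy, in bits, of a discrete probability distribution."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy_bits([0.5, 0.5]))    # fair coin: 1.0 bit per toss
    print(entropy_bits([0.9, 0.1]))    # biased coin: ~0.47 bits per toss

Highly predictable sources carry fewer bits per symbol than their raw length suggests, which is the sense in which redundant French text could be squeezed into Chappe's two-symbol phrase codes.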
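Second, the Moore's Law arithmetic quoted above can be checked directly: a doubling every eighteen months compounds to roughly a factor of a hundred per decade, and roughly a billion (nine powers of ten) over forty-five years.

    # Doubling every 18 months, as quoted in the annotation above.
    doublings_per_decade = 10 * 12 / 18           # about 6.7 doublings
    per_decade = 2 ** doublings_per_decade        # ~100x per decade
    per_45_years = 2 ** (45 * 12 / 18)            # 2^30, about 1.07e9

    print(f"per decade: ~{per_decade:.0f}x, over 45 years: ~{per_45_years:.2e}x")
    # -> per decade: ~102x, over 45 years: ~1.07e+09x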
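Third, Shannon's law of reliable communication, invoked above to explain Wikipedia: with enough redundancy, errors introduced by a noisy channel can be corrected. The crudest illustration is a repetition code decoded by majority vote (a toy sketch, not one of Shannon's efficient codes):

    def majority_decode(received_blocks):
        """Decode a repetition code by majority vote within each block."""
        return [int(sum(block) > len(block) / 2) for block in received_blocks]

    # Each original bit was sent five times; the channel flipped a few copies.
    received = [
        [1, 1, 0, 1, 1],   # sent 1, one copy corrupted
        [0, 1, 0, 0, 0],   # sent 0, one copy corrupted
        [1, 0, 1, 1, 0],   # sent 1, two copies corrupted -- still recoverable
    ]
    print(majority_decode(received))   # [1, 0, 1]

Shannon's theorem guarantees far less wasteful codes exist, but the principle is the same: redundancy buys reliability.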
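Finally, the reversal of the "cooking rule" for self-gravitating objects has a compact standard explanation via the virial theorem (textbook material, not from Dyson's text). For a bound, self-gravitating system in equilibrium,

    \[
      2\langle K\rangle + \langle U\rangle = 0
      \quad\Longrightarrow\quad
      E = \langle K\rangle + \langle U\rangle = -\langle K\rangle .
    \]

Radiating energy away (dE < 0) therefore forces the mean kinetic energy up, and since temperature tracks mean kinetic energy, the object gets hotter as it loses energy: gravitationally bound systems have negative heat capacity.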
Weiye Loh

Skepticblog » Education 2.0 - 0 views

  •  
    For education 2.0 to become a reality, the use of the internet and computer technology in primary education needs to become more than an afterthought - more than just an obligatory added layer, and more than just teaching students computer skills themselves. We need a massive effort to develop a digital infrastructure dedicated to computer and internet-based learning. We need schools and teachers to experiment more, to find what computers will do best, and what they are not good for. Primarily, I think we just need the development of dedicated programs and content for education. We need the equivalent of Facebook and Twitter for primary education - killer apps, the kind that are so effective that after their incorporation people will look back and wonder what they did before the application was available.
Weiye Loh

Read the Web :: Carnegie Mellon University - 0 views

  •  
    Can computers learn to read? We think so. "Read the Web" is a research project that attempts to create a computer system that learns over time to read the web. Since January 2010, our computer system called NELL (Never-Ending Language Learner) has been running continuously, attempting to perform two tasks each day: First, it attempts to "read," or extract facts from text found in hundreds of millions of web pages (e.g., playsInstrument(George_Harrison, guitar)). Second, it attempts to improve its reading competence, so that tomorrow it can extract more facts from the web, more accurately. So far, NELL has accumulated over 15 million candidate beliefs by reading the web, and it is considering these at different levels of confidence. NELL has high confidence in 928,295 of these beliefs - these are displayed on this website. It is not perfect, but NELL is learning. You can track NELL's progress below or @cmunell on Twitter, browse and download its knowledge base, read more about our technical approach, or join the discussion group.
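NELL's output, as described above, amounts to a growing set of candidate beliefs, each a relation between entities held at some level of confidence, with only high-confidence beliefs promoted for display. A minimal sketch of that data structure (apart from the playsInstrument example taken from the text, the facts, confidences, and threshold below are invented, and NELL's real pipeline is far more elaborate):

    from dataclasses import dataclass

    @dataclass
    class CandidateBelief:
        relation: str      # e.g. "playsInstrument"
        subject: str       # e.g. "George_Harrison"
        value: str         # e.g. "guitar"
        confidence: float  # current confidence in the extracted fact

    beliefs = [
        CandidateBelief("playsInstrument", "George_Harrison", "guitar", 0.98),
        CandidateBelief("cityLocatedInCountry", "Pittsburgh", "United_States", 0.95),
        CandidateBelief("playsInstrument", "George_Harrison", "sitar", 0.62),
    ]

    PROMOTION_THRESHOLD = 0.9   # only high-confidence beliefs get promoted
    knowledge_base = [b for b in beliefs if b.confidence >= PROMOTION_THRESHOLD]
    for b in knowledge_base:
        print(f"{b.relation}({b.subject}, {b.value})  [{b.confidence:.2f}]")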
Weiye Loh

The Secret Ingredient in Computational Creativity | MIT Technology Review - 0 views

  •  
    IBM has built a computational creativity machine that creates entirely new and useful stuff from its knowledge of existing stuff. And the secret sauce in all this? Big data, say the computer scientists behind it.
Weiye Loh

Your Brain on Computers - Attached to Technology and Paying a Price - NYTimes.com - 0 views

  • The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.
  • Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.
  • “It seems like he can no longer be fully in the moment.”
  • juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.
  • These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive.
  • While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.
  • even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers.
  •  
    YOUR BRAIN ON COMPUTERS Hooked on Gadgets, and Paying a Mental Price
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • 3. Collaboration at scale Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things' The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
Weiye Loh

Hacktivists as Gadflies - NYTimes.com - 0 views

  •  
    "Consider the case of Andrew Auernheimer, better known as "Weev." When Weev discovered in 2010 that AT&T had left private information about its customers vulnerable on the Internet, he and a colleague wrote a script to access it. Technically, he did not "hack" anything; he merely executed a simple version of what Google Web crawlers do every second of every day - sequentially walk through public URLs and extract the content. When he got the information (the e-mail addresses of 114,000 iPad users, including Mayor Michael Bloomberg and Rahm Emanuel, then the White House chief of staff), Weev did not try to profit from it; he notified the blog Gawker of the security hole. For this service Weev might have asked for free dinners for life, but instead he was recently sentenced to 41 months in prison and ordered to pay a fine of more than $73,000 in damages to AT&T to cover the cost of notifying its customers of its own security failure. When the federal judge Susan Wigenton sentenced Weev on March 18, she described him with prose that could have been lifted from the prosecutor Meletus in Plato's "Apology." "You consider yourself a hero of sorts," she said, and noted that Weev's "special skills" in computer coding called for a more draconian sentence. I was reminded of a line from an essay written in 1986 by a hacker called the Mentor: "My crime is that of outsmarting you, something that you will never forgive me for." When offered the chance to speak, Weev, like Socrates, did not back down: "I don't come here today to ask for forgiveness. I'm here to tell this court, if it has any foresight at all, that it should be thinking about what it can do to make amends to me for the harm and the violence that has been inflicted upon my life." He then went on to heap scorn upon the law being used to put him away - the Computer Fraud and Abuse Act, the same law that prosecutors used to go after the 26-year-old Internet activist Aaron Swart
Weiye Loh

Connecticut Law Tribune: Child Porn Decision Turns On Downloading Intent - 0 views

  •  
    Generally speaking, when you go to a web site, images are downloaded to temporary storage on your computer - whether it's a personal computer, pad, laptop or certain smartphones. This temporary storage is called "cache." The pictures and video are temporarily stored to make it easier for your computer to display those images from the web site if you go back. It makes the processing time faster. This is an automatic process conducted by your computer's operating system. Yes, that means you or a client can accidentally access child pornography unknowingly. There may be pictures or videos that depict child pornography that you haven't viewed that get automatically downloaded and stored in temporary Internet storage or cache. Yes, that means that even if you or a client accidentally access child pornography and try to delete it, if the police find out about it, they will make an arrest, push to prosecute and the resultant conviction will garner a mandatory minimum sentence of incarceration.
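The annotation above turns on how a cache works: content fetched once is written to local temporary storage automatically, without the user choosing to save it. A minimal sketch of the general mechanism (not any particular browser's implementation; the cache directory name is invented):

    import hashlib
    import pathlib
    import urllib.request

    CACHE_DIR = pathlib.Path("cache")   # a browser keeps this in its own temp storage
    CACHE_DIR.mkdir(exist_ok=True)

    def fetch(url):
        """Return the content at `url`, keeping a local copy the first time.
        The caller never asks for the file to be saved -- caching just happens."""
        key = hashlib.sha256(url.encode()).hexdigest()
        cached = CACHE_DIR / key
        if cached.exists():                        # revisit: served from disk
            return cached.read_bytes()
        with urllib.request.urlopen(url) as resp:  # first visit: downloaded...
            data = resp.read()
        cached.write_bytes(data)                   # ...and stored as a side effect
        return data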
Weiye Loh

Some Scientists Fear Computer Chips Will Soon Hit a Wall - NYTimes.com - 0 views

  • The problem has the potential to counteract an important principle in computing that has held true for decades: Moore’s Law. It was Gordon Moore, a founder of Intel, who first predicted that the number of transistors that could be nestled comfortably and inexpensively on an integrated circuit chip would double roughly every two years, bringing exponential improvements in consumer electronics.
  • In their paper, Dr. Burger and fellow researchers simulated the electricity used by more than 150 popular microprocessors and estimated that by 2024 computing speed would increase only 7.9 times, on average. By contrast, if there were no limits on the capabilities of the transistors, the maximum potential speedup would be nearly 47 times, the researchers said. (These two figures are set side by side in the arithmetic sketch after these excerpts.)
  • Some scientists disagree, if only because new ideas and designs have repeatedly come along to preserve the computer industry’s rapid pace of improvement. Dr. Dally of Nvidia, for instance, is sanguine about the future of chip design. “The good news is that the old designs are really inefficient, leaving lots of room for innovation,” he said.
  • ...3 more annotations...
  • Shekhar Y. Borkar, a fellow at Intel Labs, called Dr. Burger’s analysis “right on the dot,” but added: “His conclusions are a little different than what my conclusions would have been. The future is not as golden as it used to be, but it’s not bleak either.” Dr. Borkar cited a variety of new design ideas that he said would help ease the limits identified in the paper. Intel recently developed a way to vary the power consumed by different parts of a processor, making it possible to have both slower, lower-power transistors as well as faster-switching ones that consume more power. Increasingly, today’s processor chips contain two or more cores, or central processing units, that make it possible to use multiple programs simultaneously. In the future, Intel computers will have different kinds of cores optimized for different kinds of problems, only some of which require high power.
  • And while Intel announced in May that it had found a way to use 3-D design to crowd more transistors onto a single chip, that technology does not solve the energy problem described in the paper about dark silicon. The authors of the paper said they had tried to account for some of the promised innovation, and they argued that the question was how far innovators could go in overcoming the power limits.
  • “It’s one of those ‘If we don’t innovate, we’re all going to die’ papers,” Dr. Patterson said in an e-mail. “I’m pretty sure it means we need to innovate, since we don’t want to die!”
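
As a rough arithmetic companion to the excerpts above, the sketch below compares naive Moore's-Law doubling with the 7.9x and 47x figures quoted from the paper. The 2012 baseline year and the two-year doubling period are assumptions for illustration; the paper's own baseline and methodology differ, so the numbers are only set side by side, not reconciled.

```python
# Back-of-the-envelope comparison: ideal doubling-every-two-years scaling versus
# the paper's power-limited and unconstrained speedup estimates. Start year and
# doubling period are assumptions for illustration only.
DOUBLING_PERIOD_YEARS = 2.0   # Moore's original observation, roughly
START_YEAR, END_YEAR = 2012, 2024

def ideal_scaling_factor(years: float, doubling_period: float = DOUBLING_PERIOD_YEARS) -> float:
    """Multiplier implied by doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

ideal = ideal_scaling_factor(END_YEAR - START_YEAR)   # ~64x if nothing limited scaling
unconstrained_estimate = 47.0                          # paper's no-power-limit speedup
constrained_estimate = 7.9                             # paper's power-limited speedup

print(f"Naive doubling every {DOUBLING_PERIOD_YEARS:g} years: {ideal:.0f}x")
print(f"Paper's unconstrained estimate: {unconstrained_estimate:.0f}x")
print(f"Paper's power-limited estimate: {constrained_estimate:.1f}x")
print(f"Fraction of potential realized: {constrained_estimate / unconstrained_estimate:.0%}")
```
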
Weiye Loh

Technology and Inequality - Kenneth Rogoff - Project Syndicate - 0 views

  • it is easy to forget that market forces, if allowed to play out, might eventually exert a stabilizing role. Simply put, the greater the premium for highly skilled workers, the greater the incentive to find ways to economize on employing their talents.
  • one of the main ways to uncover cheating is by using a computer program to detect whether a player’s moves consistently resemble the favored choices of various top computer programs (a minimal sketch of this move-matching idea appears after these excerpts).
  • many other examples of activities that were once thought exclusively the domain of intuitive humans, but that computers have come to dominate. Many teachers and schools now use computer programs to scan essays for plagiarism
  • ...4 more annotations...
  • computer-grading of essays is a surging science, with some studies showing that computer evaluations are fairer, more consistent, and more informative than those of an average teacher, if not necessarily of an outstanding one.
  • the relative prices of grains, metals, and many other basic goods tended to revert to a central mean tendency over sufficiently long periods. We conjectured that even though random discoveries, weather events, and technologies might dramatically shift relative values for certain periods, the resulting price differentials would create incentives for innovators to concentrate more attention on goods whose prices had risen dramatically.
  • people are not goods, but the same principles apply. As skilled labor becomes increasingly expensive relative to unskilled labor, firms and businesses have a greater incentive to find ways to “cheat” by using substitutes for high-price inputs. The shift might take many decades, but it also might come much faster as artificial intelligence fuels the next wave of innovation.
  • Many commentators seem to believe that the growing gap between rich and poor is an inevitable byproduct of increasing globalization and technology. In their view, governments will need to intervene radically in markets to restore social balance. I disagree. Yes, we need genuinely progressive tax systems, respect for workers’ rights, and generous aid policies on the part of rich countries. But the past is not necessarily prologue: given the remarkable flexibility of market forces, it would be foolish, if not dangerous, to infer rising inequality in relative incomes in the coming decades by extrapolating from recent trends.
  •  
    Until now, the relentless march of technology and globalization has played out hugely in favor of high-skilled labor, helping to fuel record-high levels of income and wealth inequality around the world. Will the endgame be renewed class warfare, with populist governments coming to power, stretching the limits of income redistribution, and asserting greater state control over economic life?
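
Rogoff's chess example rests on move-matching: compare the moves a player actually chose with the moves a strong engine prefers, and treat an unusually high agreement rate across many games as a red flag. The sketch below is a minimal, simplified version of that idea. It assumes the third-party python-chess package and a Stockfish binary on the PATH, and it checks a single game against the engine's single best move; real detection methods use many games, several engines, and statistical baselines.

```python
# A minimal sketch of move-matching: the fraction of a player's moves that
# coincide with a strong engine's first choice. Assumes the third-party
# python-chess package ("pip install python-chess") and a "stockfish" binary on
# the PATH; the PGN path and player name in __main__ are placeholders.
import chess
import chess.engine
import chess.pgn

def engine_match_rate(pgn_path: str, player: str, depth: int = 15) -> float:
    """Return the fraction of `player`'s moves that match the engine's choice."""
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    matches, total = 0, 0
    try:
        with open(pgn_path) as handle:
            game = chess.pgn.read_game(handle)
        board = game.board()
        for move in game.mainline_moves():
            mover = game.headers["White"] if board.turn == chess.WHITE else game.headers["Black"]
            if mover == player:
                best = engine.play(board, chess.engine.Limit(depth=depth)).move
                matches += int(move == best)
                total += 1
            board.push(move)
    finally:
        engine.quit()
    return matches / total if total else 0.0

if __name__ == "__main__":
    print(f"Engine agreement: {engine_match_rate('suspect_game.pgn', 'Some Player'):.0%}")
```
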
Weiye Loh

God hates hackers: Anonymous warns Westboro Baptist Church, 'stop now, or else' - 0 views

  • Vigilante “hacktivist” group Anonymous has a new target: Westboro Baptist Church. In an open letter to the notorious Kansas-based church, Anonymous promises “vicious” retaliation against the organization if they do not “cease & desist” their protest activities.
  • Led by pastor Fred Phelps, Westboro Baptist has become infamous for picketing the funerals of US soldiers — events known as “Love Crusades” — and for their display of signs bearing inflammatory messages, like “God hates fags.” The church has long argued that their constitutionally protected right to freedom of speech allows them to continue their derogatory brand of social activism.
  • Anonymous also considers itself an “aggressive proponent” of free speech, having recently launched attacks on organizations it considers to be enemies of that right: companies like PayPal, Visa and MasterCard, who stopped processing donations to WikiLeaks after the anti-secrecy organization released a massive cache of US embassy cables; and the government of Egypt, which attempted to cut off its citizens’ access to the Internet.
  • ...1 more annotation...
  • Other Anonymous targets include the Church of Scientology and, most recently, cyber-security company HBGary, which attempted to infiltrate Anonymous. In response, the loose-knit hacker group released 71,800 HBGary emails, which revealed highly dubious activities by the company, almost instantaneously destroying HBGary’s reputation and potentially setting it on a path to financial ruin.
Weiye Loh

Roger Pielke Jr.'s Blog: It Is Always the Media's Fault - 0 views

  • Last summer NCAR issued a dramatic press release announcing that oil from the Gulf spill would soon be appearing on the beaches of the Atlantic Ocean. I discussed it here. Here are the first four paragraphs of that press release: BOULDER—A detailed computer modeling study released today indicates that oil from the massive spill in the Gulf of Mexico might soon extend along thousands of miles of the Atlantic coast and open ocean as early as this summer. The modeling results are captured in a series of dramatic animations produced by the National Center for Atmospheric Research (NCAR) and collaborators. The research was supported in part by the National Science Foundation, NCAR’s sponsor. The results were reviewed by scientists at NCAR and elsewhere, although not yet submitted for peer-review publication. “I’ve had a lot of people ask me, ‘Will the oil reach Florida?’” says NCAR scientist Synte Peacock, who worked on the study. “Actually, our best knowledge says the scope of this environmental disaster is likely to reach far beyond Florida, with impacts that have yet to be understood.” The computer simulations indicate that, once the oil in the uppermost ocean has become entrained in the Gulf of Mexico’s fast-moving Loop Current, it is likely to reach Florida's Atlantic coast within weeks. It can then move north as far as about Cape Hatteras, North Carolina, with the Gulf Stream, before turning east. Whether the oil will be a thin film on the surface or mostly subsurface due to mixing in the uppermost region of the ocean is not known.
  • A few weeks ago NCAR's David Hosansky, who presumably wrote that press release, asks whether NCAR got it wrong. His answer? No, not really: During last year’s crisis involving the massive release of oil into the Gulf of Mexico, NCAR issued a much-watched animation projecting that the oil could reach the Atlantic Ocean. But detectable amounts of oil never made it to the Atlantic, at least not in an easily visible form on the ocean surface. Not surprisingly, we’ve heard from a few people asking whether NCAR got it wrong. These events serve as a healthy reminder of a couple of things: the difference between a projection and an actual forecast, and the challenges of making short-term projections of natural processes that can act chaotically, such as ocean currents.
  • What then went wrong? First, the projection. Scientists from NCAR, the Department of Energy’s Los Alamos National Laboratory, and IFM-GEOMAR in Germany did not make a forecast of where the oil would go. Instead, they issued a projection. While there’s not always a clear distinction between the two, forecasts generally look only days or hours into the future and are built mostly on known elements (such as the current amount of humidity in the atmosphere). Projections tend to look further into the future and deal with a higher number of uncertainties (such as the rate at which oil degrades in open waters and the often chaotic movements of ocean currents). Aware of the uncertainties, the scientific team projected the likely path of the spill with a computer model of a liquid dye. They used dye rather than actual oil, which undergoes bacterial breakdown, because a reliable method to simulate that breakdown was not available. As it turned out, the oil in the Gulf broke down quickly due to exceptionally strong bacterial action and, to some extent, the use of chemical dispersants.
  • ...3 more annotations...
  • Second, the challenges of short-term behavior. The Gulf's Loop Current acts as a conveyor belt, moving from the Yucatan through the Florida Straits into the Atlantic. Usually, the current curves northward near the Louisiana and Mississippi coasts—a configuration that would have put it on track to pick up the oil and transport it into open ocean. However, the current’s short-term movements over a few weeks or even months are chaotic and impossible to predict. Sometimes small eddies, or mini-currents, peel off, shifting the position and strength of the main current. To determine the threat to the Atlantic, the research team studied averages of the Loop Current’s past behavior in order to simulate its likely course after the spill and ran several dozen computer simulations under various scenarios (a toy version of such a scenario ensemble is sketched after these excerpts). Fortunately for the East Coast, the Loop Current did not behave in its usual fashion but instead remained farther south than usual, which kept it far from the Louisiana and Mississippi coast during the crucial few months before the oil degraded and/or was dispersed with chemical treatments.
  • The Loop Current typically goes into a southern configuration about every 6 to 19 months, although it rarely remains there for very long. NCAR scientist Synte Peacock, who worked on the projection, explains that part of the reason the current is unpredictable is “no two cycles of the Loop Current are ever exactly the same." She adds that the cycles are influenced by such variables as how large the eddy is, where the current detaches and moves south, and how long it takes for the current to reform. Computer models can simulate the currents realistically, she adds. But they cannot predict when the currents will change over to a new cycle. The scientists were careful to explain that their simulations were a suite of possible trajectories demonstrating what was likely to happen, but not a definitive forecast of what would happen. They reiterated that point in a peer-reviewed study on the simulations that appeared last August in Environmental Research Letters. 
  • So who was at fault? According to Hosansky, it was those dummies in the media: These caveats, however, got lost in much of the resulting media coverage. Another perspective is that having some of these caveats in the press release might have been a good idea.
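
For readers wondering what "several dozen computer simulations under various scenarios" looks like in miniature, here is a toy ensemble: a passive tracer advected under randomly perturbed current behavior, with the spread of outcomes reported instead of a single answer. Every number and the two-regime current model below are invented for illustration and have nothing to do with the actual NCAR/LANL/IFM-GEOMAR ocean model.

```python
# Toy scenario ensemble: advect a dye blob under randomly drawn current behaviour
# and count how many scenarios carry it "out of the Gulf". All parameters are
# invented; the point is the ensemble-of-possible-trajectories methodology.
import random

NUM_SCENARIOS = 50      # "several dozen computer simulations"
NUM_DAYS = 120
EXIT_LONGITUDE = 10.0   # arbitrary coordinate standing in for "reached the Atlantic"

def run_scenario(rng: random.Random) -> bool:
    """Advect one dye blob; return True if it leaves the Gulf in this scenario."""
    x, y = 0.0, 0.0
    # Each scenario draws a different current strength and northward extent.
    current_speed = rng.uniform(0.02, 0.12)
    loop_extends_north = rng.random() < 0.7   # assumed frequency of the usual configuration
    for _ in range(NUM_DAYS):
        if loop_extends_north:
            # Carried toward the Florida Straits, with small chaotic wiggles.
            x += current_speed + rng.gauss(0.0, 0.02)
        else:
            # Current stays south, away from the coast; dye drifts slowly southward.
            y -= current_speed * 0.5 + rng.gauss(0.0, 0.02)
    return x >= EXIT_LONGITUDE

if __name__ == "__main__":
    rng = random.Random(0)
    hits = sum(run_scenario(rng) for _ in range(NUM_SCENARIOS))
    print(f"{hits}/{NUM_SCENARIOS} scenarios carried the dye out of the Gulf")
```
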
Weiye Loh

De-Universalizing Access! Is there a Conspiracy to Electronically "Kettle" th... - 0 views

  •  
    those wishing to access and make use of government services or benefits may be quite out of luck if they can't afford in-home Internet service, live in a remote area, don't own a computer, and/or lack the necessary knowledge, skill, physical facility, and cognitive capacity to manage computer and Internet access and use.
Weiye Loh

Rationally Speaking: Are Intuitions Good Evidence? - 0 views

  • Is it legitimate to cite one’s intuitions as evidence in a philosophical argument?
  • appeals to intuitions are ubiquitous in philosophy. What are intuitions? Well, that’s part of the controversy, but most philosophers view them as intellectual “seemings.” George Bealer, perhaps the most prominent defender of intuitions-as-evidence, writes, “For you to have an intuition that A is just for it to seem to you that A… Of course, this kind of seeming is intellectual, not sensory or introspective (or imaginative).”2 Other philosophers have characterized them as “noninferential belief due neither to perception nor introspection”3 or alternatively as “applications of our ordinary capacities for judgment.”4
  • Philosophers may not agree on what, exactly, intuition is, but that doesn’t stop them from using it. “Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” Brian Talbot says.5 Typically, the way this works is that a philosopher challenges a theory by applying it to a real or hypothetical case and showing that it yields a result which offends his intuitions (and, he presumes, his readers’ as well).
  • ...16 more annotations...
  • For example, John Searle famously appealed to intuition to challenge the notion that a computer could ever understand language: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output)… If the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.” Should we take Searle’s intuition that such a system would not constitute “understanding” as good evidence that it would not? Many critics of the Chinese Room argument argue that there is no reason to expect our intuitions about intelligence and understanding to be reliable.
  • Ethics leans especially heavily on appeals to intuition, with a whole school of ethicists (“intuitionists”) maintaining that a person can see the truth of general ethical principles not through reason, but because he “just sees without argument that they are and must be true.”6
  • Intuitions are also called upon to rebut ethical theories such as utilitarianism: maximizing overall utility would require you to kill one innocent person if, in so doing, you could harvest her organs and save five people in need of transplants. Such a conclusion is taken as a reductio ad absurdum, requiring utilitarianism to be either abandoned or radically revised – not because the conclusion is logically wrong, but because it strikes nearly everyone as intuitively wrong.
  • British philosopher G.E. Moore used intuition to argue that the existence of beauty is good irrespective of whether anyone ever gets to see and enjoy that beauty. Imagine two planets, he said, one full of stunning natural wonders – trees, sunsets, rivers, and so on – and the other full of filth. Now suppose that nobody will ever have the opportunity to glimpse either of those two worlds. Moore concluded, “Well, even so, supposing them quite apart from any possible contemplation by human beings; still, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would."7
  • Although similar appeals to intuition can be found throughout all the philosophical subfields, their validity as evidence has come under increasing scrutiny over the last two decades, from philosophers such as Hilary Kornblith, Robert Cummins, Stephen Stich, Jonathan Weinberg, and Jaakko Hintikka (links go to representative papers from each philosopher on this issue). The severity of their criticisms vary from Weinberg’s warning that “We simply do not know enough about how intuitions work,” to Cummins’ wholesale rejection of philosophical intuition as “epistemologically useless.”
  • One central concern for the critics is that a single question can inspire totally different, and mutually contradictory, intuitions in different people.
  • For example, I disagree with Moore’s intuition that it would be better for a beautiful planet to exist than an ugly one even if there were no one around to see it. I can’t understand what the words “better” and “worse,” let alone “beautiful” and “ugly,” could possibly mean outside the domain of the experiences of conscious beings
  • If we want to take philosophers’ intuitions as reason to believe a proposition, then the existence of opposing intuitions leaves us in the uncomfortable position of having reason to believe both a proposition and its opposite.
  • “I suspect there is overall less agreement than standard philosophical practice presupposes, because having the ‘right’ intuitions is the entry ticket to various subareas of philosophy,” Weinberg says.
  • But the problem that intuitions are often not universally shared is overshadowed by another problem: even if an intuition is universally shared, that doesn’t mean it’s accurate. For in fact there are many universal intuitions that are demonstrably false.
  • People who have not been taught otherwise typically assume that an object dropped out of a moving plane will fall straight down to earth, at exactly the same latitude and longitude from which it was dropped. What will actually happen is that, because the object begins its fall with the same forward momentum it had while it was on the plane, it will continue to travel forward, tracing out a curve as it falls and not a straight line (the short derivation after these excerpts makes the curved path explicit). “Considering the inadequacies of ordinary physical intuitions, it is natural to wonder whether ordinary moral intuitions might be similarly inadequate,” Princeton’s Gilbert Harman has argued,9 and the same could be said for our intuitions about consciousness, metaphysics, and so on.
  • We can’t usually “check” the truth of our philosophical intuitions externally, with an experiment or a proof, the way we can in physics or math. But it’s not clear why we should expect intuitions to be true. If we have an innate tendency towards certain intuitive beliefs, it’s likely because they were useful to our ancestors.
  • But there’s no reason to expect that the intuitions which were true in the world of our ancestors would also be true in other, unfamiliar contexts
  • And for some useful intuitions, such as moral ones, “truth” may have been beside the point. It’s not hard to see how moral intuitions in favor of fairness and generosity would have been crucial to the survival of our ancestors’ tribes, as would the intuition to condemn tribe members who betrayed those reciprocal norms. If we can account for the presence of these moral intuitions by the fact that they were useful, is there any reason left to hypothesize that they are also “true”? The same question could be asked of the moral intuitions which Jonathan Haidt has classified as “purity-based” – an aversion to incest, for example, would clearly have been beneficial to our ancestors. Since that fact alone suffices to explain the (widespread) presence of the “incest is morally wrong” intuition, why should we take that intuition as evidence that “incest is morally wrong” is true?
  • The still-young debate over intuition will likely continue to rage, especially since it’s intertwined with a rapidly growing body of cognitive and social psychological research examining where our intuitions come from and how they vary across time and place.
  • its resolution bears on the work of literally every field of analytic philosophy, except perhaps logic. Can analytic philosophy survive without intuition? (If so, what would it look like?) And can the debate over the legitimacy of appeals to intuition be resolved with an appeal to intuition?
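
For the dropped-object example above, a two-line derivation (assuming constant gravity and neglecting air resistance) shows why the object traces a parabola rather than falling straight down: the horizontal speed it inherits from the plane never goes away.

```latex
% Object released from height h while moving horizontally at the plane's speed v_0;
% air resistance neglected. Eliminating t shows the path is a parabola, not a vertical line.
\begin{align*}
x(t) &= v_0 t, \qquad y(t) = h - \tfrac{1}{2} g t^{2}, \\
y(x) &= h - \frac{g}{2 v_0^{2}}\, x^{2}.
\end{align*}
```
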
Weiye Loh

What humans know that Watson doesn't - CNN.com - 0 views

  • One of the most frustrating experiences produced by the winter from hell is dealing with the airlines' automated answer systems. Your flight has just been canceled and every second counts in getting an elusive seat. Yet you are stuck in an automated menu spelling out the name of your destination city.
  • Even more frustrating is knowing that you will never get to ask the question you really want to ask, as it isn't an option: "If I drive to Newark and board my flight to Tel Aviv there, will you cancel my whole trip, as I haven't started from my ticketed airport of origin, Ithaca?"
  • A human would immediately understand the question and give you an answer. That's why knowledgeable travelers rush to the nearest airport when they experience a cancellation, so they have a chance to talk to a human agent who can override the computer, rather than rebook by phone (more likely wait on hold and listen to messages about how wonderful a destination Tel Aviv is) or talk to a computer.
  • ...6 more annotations...
  • There is no doubt the IBM supercomputer Watson gave an impressive performance on "Jeopardy!" this week. But I was worried by the computer's biggest fluff Tuesday night. In answer to the question about naming a U.S. city whose first airport is named after a World War II hero and its second after a World War II battle, it gave Toronto, Ontario. Not even close!
  • Both the humans on the program knew the correct answer: Chicago. Even a famously geographically challenged person like me
  • Why did I know it? Because I have spent enough time stranded at O'Hare to have visited the monument to Butch O'Hare in the terminal. Watson, who has not, came up with the wrong answer. This reveals precisely what Watson lacks -- embodiment.
  • Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.
  • Providing unique answers to questions limited to around 25 words is not the same as dealing with real problems of an emotionally distraught passenger in an open system where there may not be a unique answer.
  • Watson beating the pants out of us on "Jeopardy!" is fun -- rather like seeing a tractor beat a human tug-of-war team. Machines have always been better than humans at some tasks.