
Home/ New Media Ethics 2009 course/ Group items tagged Technological Embodiment


Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
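Morse’s scheme of short and long pulses is easy to sketch as a lookup table from letters to pulse groups. The snippet below is a minimal illustration only: it uses a handful of letters from the later International Morse alphabet, not Morse’s full 1844 American code.

```python
# Abridged International Morse table: "." is a short pulse, "-" a long one.
MORSE = {
    "M": "--", "O": "---", "R": ".-.", "S": "...",
    "E": ".",  "C": "-.-.", "D": "-..",
}

def encode(text):
    """Translate letters into dot/dash pulse groups, space-separated."""
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(encode("Morse"))  # -- --- .-. ... .
```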
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
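The arithmetic in this passage is easy to check: doubling every eighteen months compounds to roughly a factor of a hundred per decade, and to about nine powers of ten over forty-five years.

```python
# Doubling every 18 months: growth over a span is 2 ** (months / 18).
def growth_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(growth_factor(10))  # about 101.6: roughly a hundredfold per decade
print(growth_factor(45))  # 2 ** 30, about 1.07e9: nine powers of ten
```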
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
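Shannon’s estimate translates readily into modern units: one hundred trillion bits comes to 12.5 terabytes, which is indeed commodity disk-drive territory today.

```python
bits = 100e12                  # Shannon's 1949 estimate for the Library of Congress
terabytes = bits / 8 / 1e12    # 8 bits per byte; 1 TB = 10**12 bytes
print(terabytes)               # 12.5
```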
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in a 1386 parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
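The redundancy argument can be illustrated with the simplest possible error-correcting scheme: a repetition code with majority voting. This is a toy sketch, not Shannon’s actual construction, but it shows the principle that errors in a noisy channel can be corrected as long as the transmission is redundant enough that fewer than half the copies of each bit are flipped.

```python
def transmit(bits, copies=5):
    """Add redundancy by sending each bit several times."""
    return [b for b in bits for _ in range(copies)]

def receive(signal, copies=5):
    """Recover each original bit by majority vote over its copies."""
    chunks = [signal[i:i + copies] for i in range(0, len(signal), copies)]
    return [1 if sum(chunk) > copies // 2 else 0 for chunk in chunks]

message = [1, 0, 1, 1]
signal = transmit(message)
signal[0] ^= 1   # flip a couple of bits to simulate channel noise
signal[7] ^= 1
print(receive(signal) == message)  # True: the errors are corrected by redundancy
```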
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • But the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Roger Pielke Jr.'s Blog: Faith-Based Education and a Return to Shop Class - 0 views

  • In the United States, nearly a half century of research, application of new technologies and development of new methods and policies has failed to translate into improved reading abilities for the nation’s children.
  • the reasons why progress has been so uneven point to three simple rules for anticipating when more research and development (R&D) could help to yield rapid social progress. In a world of limited resources, the trick is distinguishing problems amenable to technological fixes from those that are not. Our rules provide guidance in making this distinction . . .
  • unlike vaccines, the textbooks and software used in education do not embody the essence of what needs to be done. That is, they don’t provide the basic ‘go’ of teaching and learning. That depends on the skills of teachers and on the attributes of classrooms and students. Most importantly, the effectiveness of a vaccine is largely independent of who gives or receives it, and of the setting in which it is given.
  • The three rules for a technological fix proposed by Sarewitz and Nelson are: I. The technology must largely embody the cause–effect relationship connecting problem to solution. II. The effects of the technological fix must be assessable using relatively unambiguous or uncontroversial criteria. III. Research and development is most likely to contribute decisively to solving a social problem when it focuses on improving a standardized technical core that already exists.
  • technology in the classroom fails with respect to each of the three criteria: (a) technology is not a causal factor in learning in the sense that more technology means more learning, (b) assessment of educational outcomes is itself difficult and contested, much less disentangling various causal factors, and (c) the lack of evidence that technology leads to improved educational outcomes means that there is no such standardized technological core.
  • This conundrum calls into question one of the most significant contemporary educational movements. Advocates for giving schools a major technological upgrade — which include powerful educators, Silicon Valley titans and White House appointees — say digital devices let students learn at their own pace, teach skills needed in a modern economy and hold the attention of a generation weaned on gadgets. Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
  • absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills — like using PowerPoint and multimedia tools — at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.
  • [D]emand for educated labour is being reconfigured by technology, in much the same way that the demand for agricultural labour was reconfigured in the 19th century and that for factory labour in the 20th. Computers can not only perform repetitive mental tasks much faster than human beings. They can also empower amateurs to do what professionals once did: why hire a flesh-and-blood accountant to complete your tax return when Turbotax (a software package) will do the job at a fraction of the cost? And the variety of jobs that computers can do is multiplying as programmers teach them to deal with tone and linguistic ambiguity. Several economists, including Paul Krugman, have begun to argue that post-industrial societies will be characterised not by a relentless rise in demand for the educated but by a great “hollowing out”, as mid-level jobs are destroyed by smart machines and high-level job growth slows. David Autor, of the Massachusetts Institute of Technology (MIT), points out that the main effect of automation in the computer era is not that it destroys blue-collar jobs but that it destroys any job that can be reduced to a routine. Alan Blinder, of Princeton University, argues that the jobs graduates have traditionally performed are if anything more “offshorable” than low-wage ones. A plumber or lorry-driver’s job cannot be outsourced to India.
  • In 2008 Dick Nelson and Dan Sarewitz had a commentary in Nature (here in PDF) that eloquently summarized why we should not expect technology in the classroom to result in better educational outcomes, as they suggest we should in the case of a technology like vaccines.
Weiye Loh

What humans know that Watson doesn't - CNN.com - 0 views

  • One of the most frustrating experiences produced by the winter from hell is dealing with the airlines' automated answer systems. Your flight has just been canceled and every second counts in getting an elusive seat. Yet you are stuck in an automated menu spelling out the name of your destination city.
  • Even more frustrating is knowing that you will never get to ask the question you really want to ask, as it isn't an option: "If I drive to Newark and board my flight to Tel Aviv there, will you cancel my whole trip, as I haven't started from my ticketed airport of origin, Ithaca?"
  • A human would immediately understand the question and give you an answer. That's why knowledgeable travelers rush to the nearest airport when they experience a cancellation, so they have a chance to talk to a human agent who can override the computer, rather than rebook by phone (more likely wait on hold and listen to messages about how wonderful a destination Tel Aviv is) or talk to a computer.
  • There is no doubt the IBM supercomputer Watson gave an impressive performance on "Jeopardy!" this week. But I was worried by the computer's biggest fluff Tuesday night. In answer to the question about naming a U.S. city whose first airport is named after a World War II hero and its second after a World War II battle, it gave Toronto, Ontario. Not even close!
  • Both the humans on the program knew the correct answer: Chicago. Even a famously geographically challenged person like me knew it.
  • Why did I know it? Because I have spent enough time stranded at O'Hare to have visited the monument to Butch O'Hare in the terminal. Watson, who has not, came up with the wrong answer. This reveals precisely what Watson lacks -- embodiment.
  • Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.
  • Providing unique answers to questions limited to around 25 words is not the same as dealing with real problems of an emotionally distraught passenger in an open system where there may not be a unique answer.
  • Watson beating the pants out of us on "Jeopardy!" is fun -- rather like seeing a tractor beat a human tug-of-war team. Machines have always been better than humans at some tasks.
Weiye Loh

Rationally Speaking: Ray Kurzweil and the Singularity: visionary genius or pseudoscient... - 0 views

  • I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
  • Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
  • The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).”
  • the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
  • here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.”
  • Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
  • “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
  • Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks out completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another type of beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one?
  • As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
  • To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
  • The idea is that even under very pessimistic (i.e., very un-Kurzweil-like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind-boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet.
  • Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time?
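The time-scale comparison behind the Fermi-style argument above can be made concrete with a back-of-envelope calculation. The figures below are illustrative round numbers, not measured values:

```python
# Back-of-envelope version of the Fermi argument: even at a tiny fraction of
# light speed, crossing the galaxy takes far less time than the galaxy has
# existed, so an expansionist civilization "should" already be everywhere.
# All numbers are rough, illustrative assumptions.

galaxy_diameter_ly = 100_000        # Milky Way diameter in light-years (approx.)
travel_speed_c = 0.001              # pessimistic assumption: 0.1% of light speed
galaxy_age_yr = 13_000_000_000      # rough age of the galaxy in years

# At 0.1% of c, one light-year takes 1,000 years to cross.
crossing_time_yr = galaxy_diameter_ly / travel_speed_c   # 100 million years
fraction_of_age = crossing_time_yr / galaxy_age_yr       # well under 1%

print(f"Crossing time: {crossing_time_yr:,.0f} years "
      f"({fraction_of_age:.1%} of the galaxy's age)")
```

Even with this deliberately slow assumed speed, the crossing time is a small fraction of the time already available, which is what makes the "where are they?" question bite.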
Weiye Loh

Roger Pielke Jr.'s Blog: Intolerance: Virtue or Anti-Science "Doublespeak"? - 0 views

  • John Beddington, the Chief Scientific Advisor to the UK government, has identified a need to be "grossly intolerant" of certain views that get in the way of dealing with important policy problems: We are grossly intolerant, and properly so, of racism. We are grossly intolerant, and properly so, of people who [are] anti-homosexuality... We are not—and I genuinely think we should think about how we do this—grossly intolerant of pseudo-science, the building up of what purports to be science by the cherry-picking of the facts and the failure to use scientific evidence and the failure to use scientific method. One way is to be completely intolerant of this nonsense. That we don't kind of shrug it off. We don't say: ‘oh, it's the media’ or ‘oh they would say that wouldn’t they?’ I think we really need, as a scientific community—and this is a very important scientific community—to think about how we do it.
  • Fortunately, Andrew Stirling, research director of the Science Policy Research Unit (which these days I think just goes by SPRU) at the University of Sussex, provides a much healthier perspective: What is this 'pseudoscience'? For Beddington, this seems to include any kind of criticism from non-scientists of new technologies like genetically modified organisms, much advocacy of the 'precautionary principle' in environmental protection, or suggestions that science itself might also legitimately be subjected to moral considerations. Who does Beddington hold to blame for this "politically or morally or religiously motivated nonsense"? For anyone who really values the central principles of science itself, the answer is quite shocking. He is targeting effectively anyone expressing "scepticism" over what he holds to be 'scientific' pronouncements—whether on GM, climate change or any other issue. Note, it is not irrational "denial" on which Beddington is calling for 'gross intolerance', but the eminently reasonable quality of "scepticism"! The alarming contradiction here is that organised, reasoned, scepticism—accepting rational argument from any quarter without favour for social status, cultural affiliations  or institutional prestige—is arguably the most precious and fundamental quality that science itself has (imperfectly) to offer. Without this enlightening aspiration, history shows how society is otherwise all-too-easily shackled by the doctrinal intolerance, intellectual blinkers and authoritarian suppression of criticism so familiar in religious, political, cultural and media institutions.
  • Stirling concludes: [T]he basic aspirational principles of science offer the best means to challenge the ubiquitously human distorting pressures of self-serving privilege, hubris, prejudice and power. Among these principles are exactly the scepticism and tolerance against which Beddington is railing (ironically) so emotionally! Of course, scientific practices like peer review, open publication and acknowledgement of uncertainty all help reinforce the positive impacts of these underlying qualities. But, in the real world, any rational observer has to note that these practices are themselves imperfect. Although rarely achieved, it is inspirational ideals of universal, communitarian scepticism—guided by progressive principles of reasoned argument, integrity, pluralism, openness and, of course, empirical experiment—that best embody the great civilising potential of science itself. As the motto of none other than the Royal Society loosely enjoins (also sometimes somewhat ironically) "take nothing on authority". In this colourful instance of straight talking then, John Beddington is himself coming uncomfortably close to a particularly unsettling form of unscientific—even (in a deep sense) anti-scientific—'double speak'.
  • Anyone who really values the progressive civilising potential of science should argue (in a qualified way as here) against Beddington's intemperate call for "complete intolerance" of scepticism. It is the social and human realities shared by politicians, non-government organisations, journalists and scientists themselves, that make tolerance of scepticism so important. The priorities pursued in scientific research and the directions taken by technology are all as fundamentally political as other areas of policy. No matter how uncomfortable and messy the resulting debates may sometimes become, we should never be cowed by any special interest—including that of scientific institutions—away from debating these issues in open, rational, democratic ways. To allow this to happen would be to undermine science itself in the most profound sense. It is the upholding of an often imperfect pursuit of scepticism and tolerance that offer the best way to respect and promote science. Such a position is, indeed, much more in keeping with the otherwise-exemplary work of John Beddington himself. Stirling's eloquent response provides a nice tonic to Beddington's unsettling remarks. Nonetheless, Beddington's perspective should be taken as a clear warning as to the pathological state of highly politicized science these days.
Weiye Loh

Do Androids Dream of Origami Unicorns? | Institute For The Future - 0 views

  • rep.licants is the work I did for my master's thesis. During my studies, I developed an interest in the way most people use social networks, and also in the differences between someone's real identity and their digital one.
  • Back to rep.licants: when I began to think about a project for my master's thesis, I really wanted to work on those two themes (the mix between digital and real identity, and a kind of study of how users use social networks), with the aim of raising discussion about both.
  • The negative responses are mainly from people who thought rep.licants was a real, serious web service giving away, for free, high-performing bots able to replicate the user almost perfectly. If that is what they expected, I understand their disappointment, because my bot is far from high-performing! Other responses were negative because people found the idea of asking a bot to manage your own digital identity scary, so they rejected it.
  • The positive responses are mainly from people who understood that rep.licants is not about providing high-performing bots but is more of an experiment (and also a kind of critique of how most users use social networks) in which users can mix themselves with a bot and see what happens. Because even if my bots are crap, they can sometimes be surprising.
  • But I was kind of surprised that so many people would really expect a real bot to manage their social network accounts. Twitter never responded, and Facebook has responded by banning, three times already, the Facebook application that manages and runs all the Facebook bots.
  • Some people use the bot: a. Just as an experiment: they want to see what the bot can do and whether it can really improve their virtual social influence, or they experiment with how long they can keep a bot on their account without their friends noticing it is run by a bot. b. I saw a few times in my database, which stores information about the users, that some of them have a Twitter name like "renthouseUSA", so I guess they are using rep.licants to maintain a presence on social networks, for commercial purposes, without managing anything. c. This is feedback I got a lot, and it is the reason I use rep.licants on my own Twitter account: if you are precise with the keywords you give to the bot, it will sometimes find very interesting content related to your interests. My bot has made me discover a lot of interesting things, by posting them on Twitter, that I would never have found without it. New information arrives so fast and in such quantity that it becomes really difficult to deal with. For example, just on Twitter I follow 80 people (which is not a lot), all of them because I know they might tweet interesting things related to my interests. But maybe 10 of those 80 tweet quite a lot (perhaps 1-2 tweets per hour), and as I check my Twitter feed only once per day, I sometimes lose more than an hour finding the interesting tweets among everything those 80 people posted. And this is only for Twitter! I really think we need more and more personal robots to filter information for us. And this is a very positive point I found about having a bot, one that I could never have imagined when I was beginning my project.
  • One surprising bug was when the Twitter bots began to speak to themselves. It is maybe boring for some users to see their own account talk to itself once a day, but when I discovered the bug I found it very funny. So I decided to keep it!
  • this video of a chatbot having a conversation with itself went viral – perhaps in part because the conversation immediately turned towards more existentialist questions and responses.  The conversation was recorded at the Cornell Creative Machines Lab, where the faculty are researching how to make helper bots. 
  • The questions that rep.licants poses are deeply human and social ones, laced with uncertainties about the kinds of interactions we count as normal and the responsibilities we owe to ourselves and each other. Seeing these bots carry out conversations with themselves and with human counterparts (much less other non-human counterparts) allows us to take traditional social and technological research into different territory, asking not only what it means to be human but also what it means to be non-human.
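The keyword-based feed filtering the interviewee describes can be sketched as follows. This is a minimal illustration under stated assumptions: `filter_posts` and the sample feed are hypothetical, and the actual rep.licants matching logic has not been published.

```python
def filter_posts(posts, keywords):
    """Keep only posts that mention at least one keyword, case-insensitively.

    A minimal sketch of keyword-based feed filtering, in the spirit of the
    bot described in the interview; the real matching logic is an assumption.
    """
    lowered = [k.lower() for k in keywords]
    return [post for post in posts if any(k in post.lower() for k in lowered)]

# Hypothetical sample feed: the bot would surface only the posts that
# match the user's chosen keywords.
feed = [
    "New paper on social bot detection",
    "What I had for lunch today",
    "Chatbot research at the Cornell Creative Machines Lab",
]
interesting = filter_posts(feed, ["bot", "research"])
# keeps the first and third items; the lunch post matches no keyword
```

A real version would pull posts from a social network API and would need smarter matching (word boundaries, ranking, deduplication), but the core idea, reducing a high-volume feed to keyword-relevant items, is just this filter.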