
Home/ New Media Ethics 2009 course/ Group items tagged future


Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
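The gain from a phrase codebook is easy to see in miniature. The sketch below is a toy illustration of the trade-off the passage describes; the phrases and signal pairs are invented for illustration, not Chappe's actual tables:

```python
# Hypothetical two-symbol codebook, standing in for Chappe's eight thousand entries.
codebook = {"attack at dawn": (12, 7), "hold position": (3, 44)}

def encode(message):
    """Return the optical signals for a message: two symbols if the phrase is
    in the codebook, otherwise one symbol per letter (the slow fallback)."""
    if message in codebook:
        return list(codebook[message])
    return [ord(c) for c in message]

assert len(encode("attack at dawn")) == 2   # codebook hit: two symbols
assert len(encode("attaque")) == 7          # spelled out: one symbol per letter
```

A listed phrase costs two symbols regardless of length; everything else pays per letter, which is why common phrases and proper names went into the book first.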
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer and a cofounder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
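The factors quoted here follow from compounding alone: a doubling every eighteen months is a growth factor of 2^(t/1.5) after t years, which gives roughly a hundredfold per decade and about a billionfold over forty-five years. A quick check:

```python
def moore_factor(years):
    """Growth factor implied by a doubling every eighteen months (1.5 years)."""
    return 2 ** (years / 1.5)

decade = moore_factor(10)        # 2**6.67, roughly a hundredfold
half_century = moore_factor(45)  # 2**30, about 1.07e9: nine powers of ten

assert 90 < decade < 110
assert 1e9 < half_century < 1.1e9
```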
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in a 1386 parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
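Shannon's actual codes are far more efficient, but the simplest possible scheme, a repetition code decoded by majority vote, already shows how redundancy defeats noise. A toy sketch (the channel model and parameters are illustrative, not Shannon's constructions):

```python
import random

def transmit(bits, flip_prob, repeat, rng):
    """Send each bit `repeat` times over a channel that flips bits with
    probability `flip_prob`; decode each bit by majority vote."""
    decoded = []
    for b in bits:
        copies = [b ^ (rng.random() < flip_prob) for _ in range(repeat)]
        decoded.append(int(sum(copies) * 2 > repeat))
    return decoded

rng = random.Random(42)
message = [rng.randint(0, 1) for _ in range(2000)]
plain = transmit(message, flip_prob=0.2, repeat=1, rng=rng)    # no redundancy
robust = transmit(message, flip_prob=0.2, repeat=9, rng=rng)   # ninefold redundancy

errors = lambda received: sum(a != b for a, b in zip(message, received))
# Roughly a fifth of the unprotected bits arrive wrong; the redundant copy loses only a few percent.
```

More redundancy drives the error rate down toward zero, which is the intuition behind Shannon's result and, by the author's analogy, behind Wikipedia's error correction by many readers.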
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects would result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • But the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Read Aubrey McClendon's response to "misleading" New York Times article (1)

  • Since the shale gas revolution and resulting confirmation of enormous domestic gas reserves, there has been a relatively small group of analysts and geologists who have doubted the future of shale gas.  Their doubts have become very convenient to the environmental activists I mentioned earlier. This particular NYT reporter has apparently sought out a few of the doubters to fashion together a negative view of the U.S. natural gas industry. We also believe certain media outlets, especially the once venerable NYT, are being manipulated by those whose environmental or economic interests are being threatened by abundant natural gas supplies. We have seen for example today an email from a leader of a group called the Environmental Working Group who claimed today’s articles as this NYT reporter’s "second great story" (the first one declaring that produced water disposal from shale gas wells was unsafe) and that “we've been working with him for over 8 months. Much more to come. . .”
  • this reporter’s claim of impending scarcity of natural gas supply contradicts the facts and the scientific extrapolation of those facts by the most sophisticated reservoir engineers and geoscientists in the world. Not just at Chesapeake, but by experts at many of the world’s leading energy companies that have made multi-billion-dollar, long-term investments in U.S. shale gas plays, with us and many other companies. Notable examples of these companies, besides the leading independents such as Chesapeake, Devon, Anadarko, EOG, EnCana, Talisman and others, include these leading global energy giants:  Exxon, Shell, BP, Chevron, Conoco, Statoil, BHP, Total, CNOOC, Marathon, BG, KNOC, Reliance, PetroChina, Mitsui, Mitsubishi and ENI, among others.  Is it really possible that all of these companies, with a combined market cap of almost $2 trillion, know less about shale gas than a NYT reporter, a few environmental activists and a handful of shale gas doubters?
  •  
    Administrator's Note: This email was sent to all Chesapeake employees from CEO Aubrey McClendon, in response to a Sunday New York Times piece by Ian Urbina entitled "Insiders Sound an Alarm Amid a Natural Gas Rush."

    FW: CHK's response to 6.26.11 NYT article on shale gas
    From: Aubrey McClendon
    Sent: Sunday, June 26, 2011 8:37 PM
    To: All Employees

    Dear CHK Employees: By now many of you may have read or heard about a story in today's New York Times (NYT) that questioned the productive capacity and economic quality of U.S. natural gas shale reserves, as well as energy reserve accounting practices used by E&P companies, including Chesapeake. The story is misleading, at best, and is the latest in a series of articles produced by this publication that obviously have an anti-industry bias. We know for a fact that today's NYT story is the handiwork of the same group of environmental activists who have been the driving force behind the NYT's ongoing series of negative articles about the use of fracking and its importance to the U.S. natural gas supply growth revolution - which is changing the future of our nation for the better in multiple areas. It is not clear to me exactly what these environmental activists are seeking to offer as their alternative energy plan, but most that I have talked to continue to naively presume that our great country need only rely on wind and solar energy to meet our current and future energy needs. They always seem to forget that wind and solar produce less than 2% of America's electricity today and are completely non-economic without ongoing government and ratepayer subsidies.
Weiye Loh

The hidden philosophy of David Foster Wallace - Salon.com Mobile

  • Taylor's argument, which he himself found distasteful, was that certain logical and seemingly unarguable premises lead to the conclusion that even in matters of human choice, the future is as set in stone as the past. We may think we can affect it, but we can't.
  • human responsibility — that, with advances in neuroscience, is of increasing urgency in jurisprudence, social codes and personal conduct. And it also shows a brilliant young man struggling against fatalism, performing exquisite exercises to convince others, and maybe himself, that what we choose to do is what determines the future, rather than the future more or less determining what we choose to do. This intellectual struggle on Wallace's part seems now a kind of emotional foreshadowing of his suicide. He was a victim of depression from an early age — even during his undergraduate years — and the future never looks more intractable than it does to someone who is depressed.
  • "Fate, Time, and Language" reminded me of how fond philosophers are of extreme situations in creating their thought experiments. In this book alone we find a naval battle, the gallows, a shotgun, poison, an accident that leads to paraplegia, somebody stabbed and killed, and so on. Why not say "I have a pretzel in my hand today. Tomorrow I will have eaten it or not eaten it" instead of "I have a gun in my hand and I will either shoot you through the heart and feast on your flesh or I won't"? Well, OK — the answer is easy: The extreme and violent scenarios catch our attention more forcefully than pretzels do. Also, philosophers, sequestered and meditative as they must be, may long for real action — beyond beekeeping.
  • Wallace, in his essay, at the very center of trying to show that we can indeed make meaningful choices, places a terrorist in the middle of Amherst's campus with his finger on the trigger mechanism of a nuclear weapon. It is by far the most narratively arresting moment in all of this material, and it says far more about the author's approaching antiestablishment explosions of prose and his extreme emotional makeup than it does about tweedy profs fantasizing about ordering their ships into battle. For, after all, who, besides everyone around him, would the terrorist have killed?
  •  
    In 1962, a philosopher (and world-famous beekeeper) named Richard Taylor published a soon-to-be-notorious essay called "Fatalism" in the Philosophical Review.
Weiye Loh

Book Review: Future Babble by Dan Gardner « Critical Thinking « Skeptic North

  • I predict that you will find this review informative. If you do, you will congratulate my foresight. If you don’t, you’ll forget I was wrong.
  • My playful intro summarizes the main thesis of Gardner’s excellent book, Future Babble: Why Expert Predictions Fail – and Why We Believe Them Anyway.
  • In Future Babble, the research area explored is the validity of expert predictions, and the primary researcher examined is Philip Tetlock. In the early 1980s, Tetlock set out to better understand the accuracy of predictions made by experts by conducting a methodologically sound large-scale experiment.
  • Gardner presents Tetlock’s experimental design in an excellent way, making it accessible to the lay person. Concisely, Tetlock examined 27,450 judgments in which 284 experts were presented with clear questions whose answers could later be shown to be true or false (e.g., “Will the official unemployment rate be higher, lower or the same a year from now?”). For each prediction, the expert had to answer clearly and express their degree of certainty as a percentage (e.g., dead certain = 100%). The use of precise numbers adds statistical options and removes the complications of vague or ambiguous language.
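Once forecasts carry explicit probabilities, accuracy can be scored numerically. The sketch below uses the Brier score, a standard calibration metric closely related to, though not necessarily identical with, Tetlock's own scoring:

```python
def brier_score(forecasts):
    """forecasts: (probability, outcome) pairs, outcome 1 if the event happened.
    0.0 is a perfect score; unvarying 50% guesses score 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A "dead certain" expert who turns out wrong is punished far more than a hedged one.
assert brier_score([(1.0, 0)]) == 1.0                 # certain and wrong: worst case
assert abs(brier_score([(0.6, 0)]) - 0.36) < 1e-9     # hedged and wrong: milder penalty
assert brier_score([(0.5, 1), (0.5, 0)]) == 0.25      # coin-flip guessing
```

Under such a rule, "no more accurate than random guesses" means scoring no better than the 0.25 that blind 50% answers would earn.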
  • Tetlock found the surprising and disturbing truth “that experts’ predictions were no more accurate than random guesses.” (p. 26) An important caveat is that there was a wide range of capability, with some experts being completely out of touch, and others able to make successful predictions.
  • “What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same is true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.” (p. 26)
  • The experts who did poorly were not comfortable with complexity and uncertainty, and tended to reduce most problems to some core theoretical theme. It was as if they saw the world through one lens or had one big idea that everything else had to fit into. Alternatively, the experts who did decently were self-critical, used multiple sources of information and were more comfortable with uncertainty and correcting their errors. Their thinking style almost results in a paradox: “The experts who were more accurate than others tended to be less confident they were right.” (p.27)
  • Gardner then introduces the terms ‘Hedgehog’ and ‘Fox’ to refer to bad and good predictors respectively. Hedgehogs are the ones you see pushing the same idea, while Foxes are likely in the background questioning the ability of prediction itself while making cautious proposals. Foxes are more likely to be correct. Unfortunately, it is Hedgehogs that we see on the news.
  • one of Tetlock’s findings was that “the bigger the media profile of an expert, the less accurate his predictions.” (p.28)
  • Chapter 2 – The Unpredictable World: An exploration of how many events in the world are simply unpredictable. Gardner discusses chaos theory and necessary and sufficient conditions for events to occur. He supports the idea of actually saying “I don’t know,” which many experts are reluctant to do.
  • Chapter 3 – In the Minds of Experts: A more detailed examination of Hedgehogs and Foxes. Gardner discusses randomness and the illusion of control while using narratives to illustrate his points à la Gladwell. This chapter provides a lot of context and background information that should be very useful to those less initiated.
  • Chapter 6 – Everyone Loves a Hedgehog: More about predictions and how the media picks up hedgehog stories and talking points without much investigation into their underlying source or concern for accuracy. It is a good demolition of the absurdity of so many news “discussion shows.” Gardner demonstrates how the media prefer a show where Hedgehogs square off against each other, and it is important that these commentators not be challenged lest they become exposed and, by association, implicate the flawed structure of the program/network. Gardner really singles out certain people, like Paul Ehrlich, and shows how they have been wrong many times and yet can still get an audience.
  • “An assertion that cannot be falsified by any conceivable evidence is nothing more than dogma. It can’t be debated. It can’t be proven or disproven. It’s just something people choose to believe or not for reasons that have nothing to do with fact and logic. And dogma is what predictions become when experts and their followers go to ridiculous lengths to dismiss clear evidence that they failed.”
Weiye Loh

Skepticblog » The Immortalist

  • There is something almost religious about Kurzweil’s scientism, an observation he himself makes in the film, noting the similarities between his goals and that of the world’s religions: “the idea of a profound transformation in the future, eternal life, bringing back the dead—but the fact that we’re applying technology to achieve the goals that have been talked about in all human philosophies is not accidental because it does reflect the goal of humanity.” Although the film never discloses Kurzweil’s religious beliefs (he was raised by Jewish parents as a Unitarian Universalist), in a (presumably) unintentionally humorous moment that ends the film Kurzweil reflects on the God question and answers it himself: “Does God exist? I would say, ‘Not yet.’”
  • Transcendent Man is Barry Ptolemy’s beautifully crafted and artfully edited documentary film about Kurzweil and his quest to save humanity.
  • Transcendent Man pulls viewers in through Kurzweil’s visage of a future in which we merge with our machines and vastly extend our longevity and intelligence to the point where even death will be defeated. This point is what Kurzweil calls the “singularity” (inspired by the physics term denoting the infinitely dense point at the center of a black hole), and he arrives at the 2029 date by extrapolating curves based on what he calls the “law of accelerating returns.” This is “Moore’s Law” (the doubling of computing power every year) on steroids, applied to every conceivable area of science, technology and economics.
  • ...6 more annotations...
  • Ptolemy’s portrayal of Kurzweil is unmistakably positive, but to his credit he includes several critics from both religion and science. From the former, a radio host named Chuck Missler, a born-again Christian who heads the Koinonia Institute (“dedicated to training and equipping the serious Christian to sojourn in today’s world”), proclaims: “We have a scenario laid out that the world is heading for an Armageddon and you and I are going to be the generation that’s alive that is going to see all this unfold.” He seems to be saying that Kurzweil is right about the second coming, but wrong about what it is that is coming. (Of course, Missler’s prognostication is the N+1 failed prophecy that began with Jesus himself, who told his followers (Mark 9:1): “Verily I say unto you, That there be some of them that stand here, which shall not taste of death, till they have seen the kingdom of God come with power.”) Another religiously-based admonition comes from the Stanford University neuroscientist William Hurlbut, who self-identifies as a “practicing Christian” who believes in immortality, but not in the way Kurzweil envisions it. “Death is conquered spiritually,” he pronounced.
  • On the science side of the ledger, Neil Gershenfeld, director of the Center for Bits and Atoms at the Massachusetts Institute of Technology, sagely notes: “What Ray does consistently is to take a whole bunch of steps that everybody agrees on and take principles for extrapolating that everybody agrees on and show they lead to things that nobody agrees on.” Likewise, the estimable futurist Kevin Kelly, whose 2010 book What Technology Wants paints a much more realistic portrait of what our futures may (or may not) hold
  • Kelly agrees that Kurzweil’s exponential growth curves are accurate but that the conclusions and especially the inspiration drawn from them are not. “He seems to have no doubts about it and in this sense I think he is a prophetic type figure who is completely sure and nothing can waver his absolute certainty about this. So I would say he is a modern day prophet…that’s wrong.”
  • Transcendent Man is clearly meant to be an uplifting film celebrating all the ways science and technology have and are going to enrich our lives.
  • An especially lachrymose moment is when Kurzweil is rifling through his father’s journals and documents in a storage room dedicated to preserving his memory until the day that all this “data” (including Ray’s own fading memories) can be reconfigured into an A.I. simulacrum so that father and son can be reunited.
  • Although Kurzweil says he is optimistic and cheery about life, he can’t seem to stop talking about death: “It’s such a profoundly sad, lonely feeling that I really can’t bear it,” he admits. “So I go back to thinking about how I’m not going to die.” One wonders how much of life he is missing by overthinking death, or how burdensome it must surely be to imbibe over 200 supplement tablets a day and have your blood tested and cleansed every couple of months, all in an effort to reprogram the body’s biochemistry.
Weiye Loh
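Kurzweil’s “law of accelerating returns” is, at bottom, a claim about exponential extrapolation: if capability doubles on a fixed schedule, it grows by orders of magnitude within a human lifetime. A minimal sketch of that arithmetic, where the base value and doubling period are illustrative assumptions of mine, not Kurzweil’s actual figures:

```python
# Sketch of exponential extrapolation under a Moore's-Law-style doubling
# assumption. The parameters are illustrative, not Kurzweil's own figures.

def projected_capability(base: float, years: float, doubling_period: float = 1.0) -> float:
    """Capability after `years`, assuming it doubles every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# With a one-year doubling period, capability grows roughly a thousandfold
# per decade (2**10 = 1024) -- the kind of curve Kurzweil extrapolates
# from to reach conclusions like the 2029 singularity date.
growth_per_decade = projected_capability(1.0, 10.0)
print(growth_per_decade)  # 1024.0
```

Gershenfeld’s criticism quoted above is precisely that the contested step is not this arithmetic, which everyone accepts, but the leap from smooth curves to specific qualitative outcomes.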

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • ...14 more annotations...
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect, and that separates them from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the invention of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could enable someone who was mentally challenged to be normal, I would define that as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh
       
      Assumes that "normal conditions" exist. 
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  •  
    Human, know thy place!
Weiye Loh

Skepticblog » Investing in Basic Science - 0 views

  • A recent editorial in the New York Times by Nicholas Wade raises some interesting points about the nature of basic science research – primarily that it's risky.
  • As I have pointed out about the medical literature, researcher John Ioannidis has explained why most published studies turn out in retrospect to be wrong. The same is true of most basic science research – and the underlying reason is the same. The world is complex, and most of our guesses about how it might work turn out to be either flat-out wrong, incomplete, or superficial. And so most of our probing and prodding of the natural world, looking for the path to the actual answer, turns out to miss the target.
  • research costs considerable resources of time, space, money, opportunity, and people-hours. There may also be some risk involved (such as to subjects in the clinical trial). Further, negative studies are actually valuable (more so than terrible pictures). They still teach us something about the world – they teach us what is not true. At the very least this narrows the field of possibilities. But the analogy holds in so far as the goal of scientific research is to improve our understanding of the world and to provide practical applications that make our lives better. Wade writes mostly about how we fund research, and this relates to our objectives. Most of the corporate research money is interested in the latter – practical (and profitable) applications. If this is your goal, then basic science research is a bad bet. Most investments will be losers, and for most companies this will not be offset by the big payoffs of the rare winners. So many companies will allow others to do the basic science (government, universities, start up companies) then raid the winners by using their resources to buy them out, and then bring them the final steps to a marketable application. There is nothing wrong or unethical about this. It’s a good business model.
  • ...8 more annotations...
  • What, then, is the role of public (government) funding of research? Primarily, Wade argues (and I agree), to provide infrastructure for expensive research programs, such as building large colliders.
  • the more the government invests in basic science and infrastructure, the more winners will emerge that private industry can then capitalize on. This is a good way to build a competitive dynamic economy.
  • But there is a pitfall – prematurely picking winners and losers. Wade gives the example of California investing specifically into developing stem cell treatments. He argues that stem cells, while promising, do not hold a guarantee of eventual success, and perhaps there are other technologies that will work and are being neglected. The history of science and technology has clearly demonstrated that it is wickedly difficult to predict the future (and all those who try are destined to be mocked by future generations with the benefit of perfect hindsight). Prematurely committing to one technology therefore contains a high risk of wasting a great deal of limited resources, and missing other perhaps more fruitful opportunities.
  • The underlying concept is that science research is a long-term game. Many avenues of research will not pan out, and those that do will take time to inspire specific applications. The media, however, likes catchy headlines. That means when they are reporting on basic science research journalists ask themselves – why should people care? What is the application of this that the average person can relate to? This seems reasonable from a journalistic point of view, but with basic science reporting it leads to wild speculation about a distant possible future application. The public is then left with the impression that we are on the verge of curing the common cold or cancer, or developing invisibility cloaks or flying cars, or replacing organs and having household robot servants. When a few years go by and we don’t have our personal android butlers, the public then thinks that the basic science was a bust, when in fact there was never a reasonable expectation that it would lead to a specific application anytime soon. But it still may be on track for interesting applications in a decade or two.
  • this also means that the government, generally, should not be in the game of picking winners and losers – putting their thumb on the scale, as it were. Rather, they will get the most bang for the research buck if they simply invest in science infrastructure, and also fund scientists in broad areas.
  • The same is true of technology – don’t pick winners and losers. The much-hyped “hydrogen economy” comes to mind. Let industry and the free market sort out what will work. If you have to invest in infrastructure before a technology is mature, then at least hedge your bets and keep funding flexible. Fund “alternative fuel” as a general category, and reassess on a regular basis how funds should be allocated. But don’t get too specific.
  • Funding research but leaving the details to scientists may be optimal
  • The scientific community can do their part by getting better at communicating with the media and the public. Try to avoid the temptation to overhype your own research, just because it is the most interesting thing in the world to you personally and you feel hype will help your funding. Don’t make it easy for the media to sensationalize your research – you should be the ones trying to hold back the reins. Perhaps this is too much to hope for – market forces conspire too much to promote sensationalism.
Weiye Loh

The Fake Scandal of Climategate - 0 views

  • The most comprehensive inquiry was the Independent Climate Change Email Review led by Sir Muir Russell, commissioned by UEA to examine the behaviour of the CRU scientists (but not the scientific validity of their work). It published its final report in July 2010
  • It focused on what the CRU scientists did, not what they said, investigating the evidence for and against each allegation. It interviewed CRU and UEA staff, and took 111 submissions including one from CRU itself. And it also did something the media completely failed to do: it attempted to put the actions of CRU scientists into context.
    • Weiye Loh
       
      Data, in the form of email correspondence, requires context to be interpreted "objectively" and "accurately" =)
  • The Review went back to primary sources to see if CRU really was hiding or falsifying their data. It considered how much CRU’s actions influenced the IPCC’s conclusions about temperatures during the past millennium. It commissioned a paper by Dr Richard Horton, editor of The Lancet, on the context of scientific peer review. And it asked IPCC Review Editors how much influence individuals could wield on writing groups.
  • ...16 more annotations...
  • Many of these are things any journalist could have done relatively easily, but few ever bothered to do.
  • the emergence of the blogosphere requires significantly more openness from scientists. However, providing the details necessary to validate large datasets can be difficult and time-consuming, and how FoI laws apply to research is still an evolving area. Meanwhile, the public needs to understand that science cannot and does not produce absolutely precise answers. Though the uncertainties may become smaller and better constrained over time, uncertainty in science is a fact of life which policymakers have to deal with. The chapter concludes: “the Review would urge all scientists to learn to communicate their work in ways that the public can access and understand”.
  • email is less formal than other forms of communication: “Extreme forms of language are frequently applied to quite normal situations by people who would never use it in other communication channels.” The CRU scientists assumed their emails to be private, so they used “slang, jargon and acronyms” which would have been more fully explained had they been talking to the public. And although some emails suggest CRU went out of their way to make life difficult for their critics, there are others which suggest they were bending over backwards to be honest. Therefore the Review found “the e-mails cannot always be relied upon as evidence of what actually occurred, nor indicative of actual behaviour that is extreme, exceptional or unprofessional.” [section 4.3]
  • when put into the proper context, what do these emails actually reveal about the behaviour of the CRU scientists? The report concluded (its emphasis):
  • we find that their rigour and honesty as scientists are not in doubt.
  • we did not find any evidence of behaviour that might undermine the conclusions of the IPCC assessments.
  • “But we do find that there has been a consistent pattern of failing to display the proper degree of openness, both on the part of the CRU scientists and on the part of the UEA, who failed to recognize not only the significance of statutory requirements but also the risk to the reputation of the University and indeed, to the credibility of UK climate science.” [1.3]
  • The argument that Climategate reveals an international climate science conspiracy is not really a very skeptical one. Sure, it is skeptical in the weak sense of questioning authority, but it stops there. Unlike true skepticism, it doesn’t go on to objectively examine all the evidence and draw a conclusion based on that evidence. Instead, it cherry-picks suggestive emails, seeing everything as incontrovertible evidence of a conspiracy, and concludes all of mainstream climate science is guilty by association. This is not skepticism; this is conspiracy theory.
    • Weiye Loh
       
      How then do we know that we have examined ALL the evidence? What about the context of evidence then? 
  • The media dropped the ball There is a famous quotation attributed to Mark Twain: “A lie can travel halfway around the world while the truth is putting on its shoes.” This is more true in the internet age than it was when Mark Twain was alive. Unfortunately, it took months for the Climategate inquiries to put on their shoes, and by the time they reported, the damage had already been done. The media acted as an uncritical loudspeaker for the initial allegations, which will now continue to circulate around the world forever, then failed to give anywhere near the same amount of coverage to the inquiries clearing the scientists involved. For instance, Rupert Murdoch’s The Australian published no less than 85 stories about Climategate, but not one about the Muir Russell inquiry.
  • Even the Guardian, who have a relatively good track record on environmental reporting and were quick to criticize the worst excesses of climate conspiracy theorists, could not resist the lure of stolen emails. As George Monbiot writes, journalists see FoI requests and email hacking as a way of keeping people accountable, rather than the distraction from actual science which they are to scientists. In contrast, CRU director Phil Jones says: “I wish people would spend as much time reading my scientific papers as they do reading my e-mails.”
  • This is part of a broader problem with climate change reporting: the media holds scientists to far higher standards than it does contrarians. Climate scientists have to be right 100% of the time, but contrarians apparently can get away with being wrong nearly 100% of the time. The tiniest errors of climate scientists are nitpicked and blown out of all proportion, but contrarians get away with monstrous distortions and cherry-picking of evidence. Around the same time The Australian was bashing climate scientists, the same newspaper had no problem publishing Viscount Monckton’s blatant misrepresentations of IPCC projections (not to mention his demonstrably false conspiracy theory that the Copenhagen summit was a plot to establish a world government).
  • In the current model of environmental reporting, the contrarians do not lose anything by making baseless accusations. In fact, it is in their interests to throw as much mud at scientists as possible to increase the chance that some of it will stick in the public consciousness. But there is untold damage to the reputation of the scientists against whom the accusations are being made. We can only hope that in future the media will be less quick to jump to conclusions. If only editors and producers would stop and think for a moment about what they’re doing: they are playing with the future of the planet.
  • As worthy as this defense is, surely this is the kind of political bun-fight SkS has resolutely stayed away from since its inception. The debate can only become a quagmire of competing claims, because this is part of an adversarial process that does not depend on, or even require, scientific evidence. Only by sticking resolutely to the science and the advocacy of the scientific method can SkS continue to avoid being drowned in the kind of mud through which we are obliged to wade elsewhere.
  • I disagree with gp. It is past time we all got angry, very angry, at what these people have done and continue to do. Dispassionate science doesn't cut it with the denial industry or with the media (and that "or" really isn't there). It's time to fight back with everything we can throw back at them.
  • The fact that three quick-fire threads have been run on Climategate on this excellent blog in the last few days is an indication that Climategate (fairly or not) has done serious damage to the cause of AGW activism. Mass media always overshoots and exaggerates. The AGW alarmists had a very good run - here in Australia protagonists like Tim Flannery and our living science legend Robin Williams were talking catastrophe - the 10 year drought was definitely permanent climate change - rivers might never run again - Robin (100 metre sea level rise) Williams refused to even read the Climategate emails. Climategate swung the pendulum to the other extreme - the scientists (nearly all funded by you and me) were under the pump. Their socks rubbed harder on their sandals as they scrambled for clear air. Cries about criminal hackers funded by big oil, tobacco, rightist conspirators etc were heard. Pachauri cried 'voodoo science' as he denied ever knowing about objections to the preposterous 2035 claim. How things change in a year. The drought is broken over most of Australia - Tim Flannery has gone quiet and Robin Williams is airing a science journo who says that AGW scares have been exaggerated. Some balance might have been restored as the pendulum swung, and our hard-working, misunderstood scientist brethren will take more care with their emails in future.
  • "Perhaps a more precise description would be that a common pattern in global warming skeptic arguments is to focus on narrow pieces of evidence while ignoring other evidence that contradicts their argument." And this is the issue the article discusses, but in my opinion this article is guilty of this as well. It focuses on a narrow set of non-representative claims, claims which are indeed pure propaganda by some skeptics; however, the article also suggests guilt by association, and as such these propaganda claims then get attributed as the opinions of the entire skeptic camp. In doing so, the OP becomes guilty of the very same issue the OP tries to address. In other words, the issue I try to raise is not about the exact numbers or figures or any particular facts, but the fact that the claim I quoted is obvious nonsense. It is nonsense because it is a sweeping statement with no specifics, and as such it is an empty statement and means nothing. A second point I have been thinking about when reading this article is why scientists should be granted immunity to dirty tricks/propaganda in a political debate. Is it because they speak under the name of science? If that is the case, why shall we not grant the same right to other spokesmen for other organizations?
    • Weiye Loh
       
      The aspiration to examine ALL evidence is again called into question here. Is it really possible to examine ALL evidence? Even if we have examined them, can we fully represent our examination? From our lab, to the manuscript, to the journal paper, to the news article, to 140characters tweets?
Weiye Loh

Roger Pielke Jr.'s Blog: The Flip Side of Extreme Event Attribution - 0 views

  • It is just logical that one cannot make the claim that action on climate change will influence future extreme events without first being able to claim that greenhouse gas emissions have a discernible influence on those extremes. This probably helps to explain why there is such a push to classify the attribution issue as settled. But this is just piling on one bad argument on top of another.
  • Even if you believe that attribution has been achieved, these are bad arguments for the simple fact that detecting the effects on the global climate system of emissions reductions would take many, many (many!) decades.  For instance, for an aggressive climate policy that would stabilize carbon dioxide at 450 ppm, detecting a change in average global temperatures would necessarily occur in the second half of this century.  Detection of changes in extreme events would take even longer.
  • To suggest that action on greenhouse gas emissions is a mechanism for modulating the impacts of extreme events remains a highly misleading argument.  There are better justifications for action on carbon dioxide that do not depend on contorting the state of the science.
Weiye Loh

The Greening of the American Brain - TIME - 0 views

  • The past few years have seen a marked decline in the percentage of Americans who believe what scientists say about climate, with belief among conservatives falling especially fast. It's true that the science community has hit some bumps — the IPCC was revealed to have made a few dumb errors in its recent assessment, and the "Climategate" hacked emails showed scientists behaving badly. But nothing changed the essential truth that more man-made CO2 means more warming; in fact, the basic scientific case has only gotten stronger. Yet still, much of the American public remains unconvinced — and importantly, last November that public returned control of the House of Representatives to a Republican party that is absolutely hostile to the basic truths of climate science.
  • facts and authority alone may not shift people's opinions on climate science or many other topics. That was the conclusion I took from the Climate, Mind and Behavior conference, a meeting of environmentalists, neuroscientists, psychologists and sociologists that I attended last week at the Garrison Institute in New York's Hudson Valley. We like to think of ourselves as rational creatures who select from the choices presented to us for maximum individual utility — indeed, that's the essential principle behind most modern economics. But when you do assume rationality, the politics of climate change get confusing. Why would so many supposedly rational human beings choose to ignore overwhelming scientific authority?
  • Maybe because we're not actually so rational after all, as research is increasingly showing. Emotions and values — not always fully conscious — play an enormous role in how we process information and make choices. We are beset by cognitive biases that throw what would be sound decision-making off-balance. Take loss aversion: psychologists have found that human beings tend to be more concerned about avoiding losses than achieving gains, holding onto what they have even when this is not in their best interests. That has a simple parallel to climate politics: environmentalists argue that the shift to a low-carbon economy will create abundant new green jobs, but for many people, that prospect of future gain — even if it comes with a safer planet — may not be worth the risk of losing the jobs and economy they have.
  • ...4 more annotations...
  • What's the answer for environmentalists? Change the message and frame the issue in a way that doesn't trigger unconscious opposition among so many Americans. That can be as simple as using the right labels: a recent study by researchers at the University of Michigan found that Republicans are less skeptical of "climate change" than "global warming," possibly because climate change sounds less specific. Possibly too because so broad a term includes the severe snowfalls of the past winter, which can be a paradoxical result of a generally warmer world. Greens should also pin their message on subjects that are less controversial, like public health or national security. Instead of issuing dire warnings about an apocalyptic future — which seems to make many Americans stop listening — better to talk about the present generation's responsibility to the future, to bequeath their children and grandchildren a safer and healthier planet.
  • Group identification also plays a major role in how we make decisions — and that's another way facts can get filtered. Declining belief in climate science has been, for the most part in America, a conservative phenomenon. On the surface, that's curious: you could expect Republicans to be skeptical of economic solutions to climate change like a carbon tax, since higher taxes tend to be a Democratic policy, but scientific information ought to be non-partisan. Politicians never debate the physics of space travel after all, even if they argue fiercely over the costs and priorities associated with it. That, however, is the power of group thinking; for most conservative Americans, the very idea of climate science has been poisoned by ideologues who seek to advance their economic arguments by denying scientific fact. No additional data — new findings about CO2 feedback loops or better modeling of ice sheet loss — is likely to change their mind.
  • The bright side of all this irrationality is that it means human beings can act in ways that sometimes go against their immediate utility, sacrificing their own interests for the benefit of the group.
  • Our brains develop socially, not just selfishly, which means sustainable behavior — and salvation for the planet — may not be as difficult as it sometimes seems. We can motivate people to help stop climate change — it may just not be climate science that convinces them to act.
Weiye Loh

Some Scientists Fear Computer Chips Will Soon Hit a Wall - NYTimes.com - 0 views

  • The problem has the potential to counteract an important principle in computing that has held true for decades: Moore’s Law. It was Gordon Moore, a founder of Intel, who first predicted that the number of transistors that could be nestled comfortably and inexpensively on an integrated circuit chip would double roughly every two years, bringing exponential improvements in consumer electronics.
  • In their paper, Dr. Burger and fellow researchers simulated the electricity used by more than 150 popular microprocessors and estimated that by 2024 computing speed would increase only 7.9 times, on average. By contrast, if there were no limits on the capabilities of the transistors, the maximum potential speedup would be nearly 47 times, the researchers said.
  • Some scientists disagree, if only because new ideas and designs have repeatedly come along to preserve the computer industry’s rapid pace of improvement. Dr. Dally of Nvidia, for instance, is sanguine about the future of chip design. “The good news is that the old designs are really inefficient, leaving lots of room for innovation,” he said.
  • ...3 more annotations...
  • Shekhar Y. Borkar, a fellow at Intel Labs, called Dr. Burger’s analysis “right on the dot,” but added: “His conclusions are a little different than what my conclusions would have been. The future is not as golden as it used to be, but it’s not bleak either.” Dr. Borkar cited a variety of new design ideas that he said would help ease the limits identified in the paper. Intel recently developed a way to vary the power consumed by different parts of a processor, making it possible to have both slower, lower-power transistors as well as faster-switching ones that consume more power. Increasingly, today’s processor chips contain two or more cores, or central processing units, that make it possible to use multiple programs simultaneously. In the future, Intel computers will have different kinds of cores optimized for different kinds of problems, only some of which require high power.
  • And while Intel announced in May that it had found a way to use 3-D design to crowd more transistors onto a single chip, that technology does not solve the energy problem described in the paper about dark silicon. The authors of the paper said they had tried to account for some of the promised innovation, and they argued that the question was how far innovators could go in overcoming the power limits.
  • “It’s one of those ‘If we don’t innovate, we’re all going to die’ papers,” Dr. Patterson said in an e-mail. “I’m pretty sure it means we need to innovate, since we don’t want to die!”
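
The scaling figures in this item lend themselves to a quick back-of-envelope check. The sketch below contrasts idealized Moore's-law transistor doubling with the 7.9x and 47x speedup figures quoted from the paper; the 2012 baseline year is an assumption for illustration, not something the article states.

```python
# Idealized Moore's-law doubling vs. the dark-silicon speedup figures
# quoted above (7.9x projected vs. 47x potential by 2024).
# The 2012 start year is an assumption for illustration.

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor if transistor counts double every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

span = 2024 - 2012                           # years to the paper's 2024 horizon
transistor_growth = moores_law_factor(span)  # 64x more transistors, ideally

potential_speedup = 47.0   # if transistors faced no power limits (article)
projected_speedup = 7.9    # average simulated speedup (article)
realized = projected_speedup / potential_speedup

print(f"Idealized transistor growth over {span} years: {transistor_growth:.0f}x")
print(f"Fraction of potential speedup realized: {realized:.0%}")  # ~17%
```

The gap between the two numbers is the paper's point: transistor counts can keep doubling while power limits leave most of the resulting silicon "dark".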
juliet huang

tools to live forever? - 1 views

  •  
    According to a news story on nanotechnology, the wealthy will in future be able to use nanotechnology to modify parts of their existing or future genetic heritage, i.e. they can alter body parts in non-invasive procedures or correct anomalies in future children. http://www.heraldsun.com.au/business/fully-frank/the-tools-to-live-forever/story-e6frfinf-1225791751968 These modifications would then help them evolve into a different, supposedly better, species. Ethical questions: most of the issues we have discussed in ethics operate at the macro level, perpetuating a social group's agenda. Biotechnology, however, has the potential to make this divide literal; it is no longer only an ethical question but something with the power to make what we discuss in class real. To frame it as an ethical question: who gets to decide, and how is the power distributed? Power will always stand behind the use of technologies, but who will decide how this technology is used, and for whose good? And if it is for a larger good, who can moderate its use to ensure all social actors are represented?
juliet huang

The tools to live forever ? - 1 views

According to a news story on nanotechnology, in the future, the wealthy will be able to make use of nanotechnology to modify parts of their existing or future genetic heritage, ie they can alter bo...

nanotechnology biotechnology

started by juliet huang on 28 Oct 09 no follow-up yet
Weiye Loh

BioCentre - 0 views

  • Humanity’s End. The main premise of the book is that proposals that supposedly promise to make us smarter than ever before or add thousands of years to our lives seem rather far-fetched and the domain of mere fantasy. However, it is these very proposals which form the basis of many of the ideas and thoughts presented by advocates of radical enhancement, and which are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities in order for us to become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion by focusing on four key figures and their proposals which help to form the core of the case for radical enhancement debate.  First to be examined by Agar is Ray Kurzweil who argues that Man and Machine will become one as technology allows us to transcend our biology. Second, is Aubrey de Grey who is a passionate advocate and pioneer of anti-ageing therapies which allow us to achieve “longevity escape velocity”. Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement and finally James Hughes who is a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids falling into any of the pitfalls of basing his argument solely upon the “playing God” question, but instead seeks to posit a well-founded argument in favour of the precautionary principle.
  • ...10 more annotations...
  • Agar directly tackles Hughes’ ideas of a “democratic transhumanism.” Here, as post-humans and humans live shoulder to shoulder in wonderful harmony, all persons have access to the technologies they want in order to promote their own flourishing.  Undergirding all of this is the belief that no human should feel pressurised to become enhanced. Agar finds no comfort in this and instead foresees a situation where it would be very difficult for humans to ‘choose’ to remain human.  The pressure to radically enhance would be considerable, given that the radically enhanced would no doubt occupy the positions of power in society and would regard the full use of enhancement techniques as a moral imperative for the good of society.  For those able to withstand such pressure, a new underclass would no doubt emerge between the enhanced and the un-enhanced. This is precisely the kind of society which Hughes appears overly optimistic will not emerge, but which is more akin to Lee Silver’s prediction of the future with its distinction between the "GenRich" and the "naturals”.  This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours but, crucially, does not elaborate on further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any general initial reflections the author felt able to offer would have enriched the discussion and strengthened his line of argument.
  • A Transhuman Manifesto The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this was for me a particularly interesting read. The basic premise of Hughes’ argument is that once humans and post-humans recognise each other as citizens, this will mark the point at which they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord concerning claims against enhancement. Bostrom and Ord argue that it boils down to a preference for the status quo; current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134).  Agar discusses the fact that, in his view, Bostrom falls into a focalism, focusing on and magnifying the positives whilst ignoring the negative implications.  Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so they are no longer human. It therefore cannot be said to be human enhancement, given that the traits or capacities such enhancement affords us would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and transferring our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil’s proposals, Agar makes the perceptive comment that despite the apparent appeal of greater processing power, the result would no longer be human. Introducing chips into the human body and linking the human nervous system to computers, as per Ray Kurzweil’s proposals, will prove interesting, but it goes beyond merely creating a copy of us so that future replication and uploading can take place. Rather, it will constitute something more akin to an upgrade. The electrochemical signals that the brain uses to achieve thought travel at 100 metres per second. This is impressive, but contrast it with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours what he calls “rational biological conservatism” (pg. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in the pursuit of Kurzweil’s enhanced capacities and capabilities we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in “ecological” terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to our species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, due to this interconnectedness, we may well affect and remove our desire to perform such activities (page 70). Agar’s subsequent discussion and reference to the work of Jerry Fodor, philosopher and cognitive scientist, is particularly helpful on the modular functioning of the mind and the implications of human-friendly AI versus human-unfriendly AI.
  • In terms of the author’s discussion of Aubrey de Grey, what is refreshing to read from the outset is the author’s clear grasp of Aubrey’s ideas and motivation. Some make the mistake of thinking he is the man who wants to live forever, when in actual fact this is not the case.  De Grey wants to reverse the ageing process - Strategies for Engineered Negligible Senescence (SENS) so that people are living longer and healthier lives. Establishing this clear distinction affords the author the opportunity to offer more grounded critiques of de Grey’s than some of his other critics. The author makes plain that de Grey’s immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before. In terms of risk, those radically enhanced to live longer may actually be the most risk-averse and fearful people alive. Taking the example of driving a car, a forty-year-old senescing human being who gets into their car to drive to work and is involved in a fatal accident “stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality” (p.116). In stark contrast, a negligibly senescent being who drives a car and is involved in an accident resulting in their death stands to lose on average one thousand healthy, youthful years (p.116).
  • De Grey’s response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activities such as driving will no longer be prevalent. Moreover, because we are living longer, we will not be in such a hurry to get to places!  Virtual reality comes into its own at this point as a means by which the negligibly senescent ‘adrenaline junkie’ can engage in such activities without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping et al. in the first place. De Grey’s strategy seemingly fails to appreciate the extent to which human beings want “direct” contact with the “real” world.
  • Continuing this idea further, Agar’s subsequent discussion of the role of fire-fighters is an interesting one.  A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent also makes them a better fire-fighter by virtue of increased vitality. Having recently heard de Grey speak and had the privilege of discussing his ideas further with him, I found Agar’s discussion of de Grey a particular highlight of the book, and it made for an engaging read. Whilst expressing concern and doubt about de Grey’s ideas, Agar is nevertheless quick and gracious enough to acknowledge that if such therapies could be achieved, then de Grey is probably the best person to comment on and achieve them, given the depth of knowledge and understanding he has built up in this area.
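
The signal-speed contrast cited in the Kurzweil discussion above (100 metres per second for electrochemical signals versus 300 million metres per second for electrical ones) is easy to check; this is just the arithmetic implied by the review's own figures.

```python
# Ratio of the two signal speeds quoted in the review above.
neural_speed = 100              # m/s, electrochemical signals in the brain
electronic_speed = 300_000_000  # m/s, electrical signals in a computer

speed_ratio = electronic_speed / neural_speed
print(f"Electronic signals are {speed_ratio:,.0f}x faster")  # 3,000,000x
```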
Weiye Loh

The future of customer support: Outsourcing is so last year | The Economist - 0 views

  • Gartner, the research company, estimates that using communities to solve support issues can reduce costs by up to 50%. When TomTom, a maker of satellite-navigation systems, switched on social support, members handled 20,000 cases in its first two weeks and saved it around $150,000. Best Buy, an American gadget retailer, values its 600,000 users at $5m annually. 
  •  
    "Unsourcing", as the new trend has been dubbed, involves companies setting up online communities to enable peer-to-peer support among users. Instead of speaking with a faceless person thousands of miles away, customers' problems are answered by individuals in the same country who have bought and used the same products. This happens either on the company's own website or on social networks like Facebook and Twitter, and the helpers are generally not paid anything for their efforts.
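
The TomTom figure quoted above implies a per-case saving, which is worth making explicit; the case count and total saving are from the article, the division is mine.

```python
# Per-case saving implied by the TomTom example above:
# 20,000 community-handled cases saving around $150,000 in two weeks.
cases = 20_000
total_savings = 150_000  # USD, as quoted

saving_per_case = total_savings / cases
print(f"~${saving_per_case:.2f} saved per case")  # ~$7.50
```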
Weiye Loh

BBC News - Facebook v academia: The gloves are off - 0 views

  •  
    "But this latest story once again sparked headlines around the world, even if articles often made the point that the research was not peer-reviewed. What was different, however, was Facebook's reaction. Previously, its PR team has gone into overdrive behind the scenes to rubbish this kind of research but said nothing in public. This time they used a new tactic, humour, to undermine the story. Mike Develin, a data scientist for the social network, published a note on Facebook mocking the Princeton team's "innovative use of Google search trends". He went on to use the same techniques to analyse the university's own prospects, concluding that a decline in searches over recent years "suggests that Princeton will have only half its current enrollment by 2018, and by 2021 it will have no students at all". Now, who knows, Facebook may well face an uncertain future. But academics looking to predict its demise have been put on notice - the company employs some pretty smart scientists who may take your research apart and fire back. The gloves are off."
Weiye Loh

The Creativity Crisis - Newsweek - 0 views

  • The accepted definition of creativity is production of something original and useful, and that’s what’s reflected in the tests. There is never one right answer. To be creative requires divergent thinking (generating many unique ideas) and then convergent thinking (combining those ideas into the best result).
  • Torrance’s tasks, which have become the gold standard in creativity assessment, measure creativity perfectly. What’s shocking is how incredibly well Torrance’s creativity index predicted those kids’ creative accomplishments as adults.
  • The correlation to lifetime creative accomplishment was more than three times stronger for childhood creativity than childhood IQ.
  • ...20 more annotations...
  • there is one crucial difference between IQ and CQ scores. With intelligence, there is a phenomenon called the Flynn effect—each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.
  • creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward.
  • It is the scores of younger children in America—from kindergarten through sixth grade—for whom the decline is “most serious.”
  • It’s too early to determine conclusively why U.S. creativity scores are declining. One likely culprit is the number of hours kids now spend in front of the TV and playing videogames rather than engaging in creative activities. Another is the lack of creativity development in our schools. In effect, it’s left to the luck of the draw who becomes creative: there’s no concerted effort to nurture the creativity of all children.
  • Around the world, though, other countries are making creativity development a national priority.
  • In China there has been widespread education reform to extinguish the drill-and-kill teaching style. Instead, Chinese schools are also adopting a problem-based learning approach.
  • When faculty of a major Chinese university asked Plucker to identify trends in American education, he described our focus on standardized curriculum, rote memorization, and nationalized testing.
  • Overwhelmed by curriculum standards, American teachers warn there’s no room in the day for a creativity class.
  • The age-old belief that the arts have a special claim to creativity is unfounded. When scholars gave creativity tasks to both engineering majors and music majors, their scores lay on an identical spectrum, with the same high averages and standard deviations.
  • The argument that we can’t teach creativity because kids already have too much to learn is a false trade-off. Creativity isn’t about freedom from concrete facts. Rather, fact-finding and deep research are vital stages in the creative process.
  • The lore of pop psychology is that creativity occurs on the right side of the brain. But we now know that if you tried to be creative using only the right side of your brain, it’d be like living with ideas perpetually at the tip of your tongue, just beyond reach.
  • Creativity requires constant shifting, blender pulses of both divergent thinking and convergent thinking, to combine new information with old and forgotten ideas. Highly creative people are very good at marshaling their brains into bilateral mode, and the more creative they are, the more they dual-activate.
  • “Creativity can be taught,” says James C. Kaufman, professor at California State University, San Bernardino. What’s common about successful programs is they alternate maximum divergent thinking with bouts of intense convergent thinking, through several stages. Real improvement doesn’t happen in a weekend workshop. But when applied to the everyday process of work or school, brain function improves.
  • highly creative adults tended to grow up in families embodying opposites. Parents encouraged uniqueness, yet provided stability. They were highly responsive to kids’ needs, yet challenged kids to develop skills. This resulted in a sort of adaptability: in times of anxiousness, clear rules could reduce chaos—yet when kids were bored, they could seek change, too. In the space between anxiety and boredom was where creativity flourished.
  • highly creative adults frequently grew up with hardship. Hardship by itself doesn’t lead to creativity, but it does force kids to become more flexible—and flexibility helps with creativity.
  • In early childhood, distinct types of free play are associated with high creativity. Preschoolers who spend more time in role-play (acting out characters) have higher measures of creativity: voicing someone else’s point of view helps develop their ability to analyze situations from different perspectives. When playing alone, highly creative first graders may act out strong negative emotions: they’ll be angry, hostile, anguished.
  • In middle childhood, kids sometimes create paracosms—fantasies of entire alternative worlds. Kids revisit their paracosms repeatedly, sometimes for months, and even create languages spoken there. This type of play peaks at age 9 or 10, and it’s a very strong sign of future creativity.
  • From fourth grade on, creativity no longer occurs in a vacuum; researching and studying become an integral part of coming up with useful solutions. But this transition isn’t easy. As school stuffs more complex information into their heads, kids get overloaded, and creativity suffers. When creative children have a supportive teacher—someone tolerant of unconventional answers, occasional disruptions, or detours of curiosity—they tend to excel. When they don’t, they tend to underperform and drop out of high school or don’t finish college at high rates.
  • They’re quitting because they’re discouraged and bored, not because they’re dark, depressed, anxious, or neurotic. It’s a myth that creative people have these traits. (Those traits actually shut down creativity; they make people less open to experience and less interested in novelty.) Rather, creative people, for the most part, exhibit active moods and positive affect. They’re not particularly happy—contentment is a kind of complacency creative people rarely have. But they’re engaged, motivated, and open to the world.
  • A similar study of 1,500 middle schoolers found that those high in creative self-efficacy had more confidence about their future and ability to succeed. They were sure that their ability to come up with alternatives would aid them, no matter what problems would arise.
  •  
    The Creativity Crisis For the first time, research shows that American creativity is declining. What went wrong-and how we can fix it.
Jody Poh

U.S. students fight copyright law - 9 views

http://www.nytimes.com/2007/10/11/technology/11iht-download.1.7846678.html?scp=20&sq=copyright&st=Search A student previously fined for breaking copyright laws at Brown University in Rhode Island ...

copyright "file sharing" "Intellectual property rights"

started by Jody Poh on 25 Aug 09 no follow-up yet
Weiye Loh

Rationally Speaking: Should non-experts shut up? The skeptic's catch-22 - 0 views

  • You can read the talk here, but in a nutshell, Massimo was admonishing skeptics who reject the scientific consensus in fields in which they have no technical expertise - the most notable recent example of this being anthropogenic climate change, about which venerable skeptics like James Randi and Michael Shermer have publicly expressed doubts (though Shermer has since changed his mind).
  • I'm totally with Massimo that it seems quite likely that anthropogenic climate change is really happening. But I'm not sure I can get behind Massimo's broader argument that non-experts should defer to the expert consensus in a field.
  • First of all, while there are strong incentives for a researcher to find errors in other work in the field, there are strong disincentives for her to challenge the field's foundational assumptions. It will be extremely difficult for her to get other people to agree with her if she tries, and if she succeeds, she'll still be taking herself down along with the rest of the field.
  • ...7 more annotations...
  • Second of all, fields naturally select for people who accept their foundational assumptions. People who don't accept those assumptions are likely not to have gone into that field in the first place, or to have left it already.
  • Sometimes those foundational assumptions are simple enough that an outsider can evaluate them - for instance, I may not be an expert in astrology or theology, but I can understand their starting premises (stars affect human fates; we should accept the Bible as the truth) well enough to confidently dismiss them, and the fields that rest on them. But when the foundational assumptions get more complex - like the assumption that we can reliably model future temperatures - it becomes much harder for an outsider to judge their soundness.
  • we almost seem to be stuck in a Catch-22: The only people who are qualified to evaluate the validity of a complex field are the ones who have studied that field in depth - in other words, experts. Yet the experts are also the people who have the strongest incentives not to reject the foundational assumptions of the field, and the ones who have self-selected for believing those assumptions. So the closer you are to a field, the more biased you are, which makes you a poor judge of it; the farther away you are, the less relevant knowledge you have, which makes you a poor judge of it. What to do?
  • luckily, the Catch-22 isn't quite as stark as I made it sound. For example, you can often find people who are experts in the particular methodology used by a field without actually being a member of the field, so they can be much more unbiased judges of whether that field is applying the methodology soundly. So for example, a foundational principle underlying a lot of empirical social science research is that linear regression is a valid tool for modeling most phenomena. I strongly recommend asking a statistics professor about that. 
  • there are some general criteria that outsiders can use to evaluate the validity of a technical field, even without “technical scientific expertise” in that field. For example, can the field make testable predictions, and does it have a good track record of predicting things correctly? This seems like a good criterion by which an outsider can judge the field of climate modeling (and "predictions" here includes using your model to predict past data accurately). I don't need to know how the insanely-complicated models work to know that successful prediction is a good sign.
  • And there are other more field-specific criteria outsiders can often use. For example, I've barely studied postmodernism at all, but I don't have to know much about the field to recognize that the fact that they borrow concepts from complex disciplines which they themselves haven't studied is a red flag.
  • the issue with AGW is less the science and all about the political solutions. Most every solution we hear in the public conversation requires some level of sacrifice and uncertainty in the future. Politicians, neither experts in climatology nor economics, craft legislation to solve the problem through the lens of their own political ideology. At TAM8, this was pretty apparent. My honest opinion is that people who are AGW skeptics are mainly skeptics of the political solutions. If AGW was said to increase the GDP of the country by two to three times, I'm guessing you'd see a lot less climate change skeptics.
  •  
    WEDNESDAY, JULY 14, 2010 Should non-experts shut up? The skeptic's catch-22
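
The "linear regression as a foundational assumption" point in this post is concrete enough to illustrate. Below is a minimal sketch of ordinary least squares, the modeling tool the post says much empirical social science rests on; the data points are invented purely for illustration.

```python
# Minimal ordinary least squares fit of y = a + b*x, the modeling tool
# described above as a foundational assumption of much empirical
# social science. The data are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); intercept follows from the means
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"fitted line: y = {a:.2f} + {b:.2f}x")
```

Whether a straight line is a *valid* model of the phenomenon is exactly the kind of foundational question the post suggests putting to a statistics professor; the fit itself is only mechanics.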
Weiye Loh

Paul Crowley's Blog - A survey of anti-cryonics writing - 0 views

  • cryonics offers almost eternal life. To its critics, cryonics is pseudoscience; the idea that we could freeze someone today in such a way that future technology might be able to re-animate them is nothing more than wishful thinking born of the desire to avoid death. Many who battle nonsense dressed as science have spoken out against it: see for example Nano Nonsense and Cryonics, a 2001 article by celebrated skeptic Michael Shermer; or check the Skeptic’s Dictionary or Quackwatch entries on the subject, or for more detail read the essay Cryonics–A futile desire for everlasting life by “Invisible Flan”.
  • And of course the pro-cryonics people have written reams and reams of material such as Ben Best’s Scientific Justification of Cryonics Practice on why they think this is exactly as plausible as I might think, and going into tremendous technical detail setting out arguments for its plausibility and addressing particular difficulties. It’s almost enough to make you want to sign up on the spot. Except, of course, that plenty of totally unscientific ideas are backed by reams of scientific-sounding documents good enough to fool non-experts like me. Backed by the deep pockets of the oil industry, global warming denialism has produced thousands of convincing-sounding arguments against the scientific consensus on CO2 and AGW.
  • Nano Nonsense and Cryonics goes for the nitty-gritty right away in the opening paragraph: “To see the flaw in this system, thaw out a can of frozen strawberries. During freezing, the water within each cell expands, crystallizes, and ruptures the cell membranes. When defrosted, all the intracellular goo oozes out, turning your strawberries into runny mush. This is your brain on cryonics.” This sounds convincing, but doesn’t address what cryonicists actually claim. Ben Best, President and CEO of the Cryonics Institute, replies in the comments: “Strawberries (and mammalian tissues) are not turned to mush by freezing because water expands and crystallizes inside the cells. Water crystallizes in the extracellular space because more nucleators are found extracellularly. As water crystallizes in the extracellular space, the extracellular salt concentration increases causing cells to lose water osmotically and shrink. Ultimately the cell membranes are broken by crushing from extracellular ice and/or high extracellular salt concentration. […] Cryonics organizations use vitrification perfusion before cooling to cryogenic temperatures. With good brain perfusion, vitrification can reduce ice formation to negligible amounts.”
  • ...6 more annotations...
  • The Skeptic’s Dictionary entry is no advance. Again, it refers erroneously to a “mushy brain”. It points out that the technology to reanimate those in storage does not already exist, but provides no help for us non-experts in assessing whether it is a plausible future technology, like super-fast computers or fusion power, or whether it is as crazy as the sand-powered tank; it simply asserts baldly and to me counterintuitively that it is the latter. Again, perhaps cryonic reanimation is a sand-powered tank, but I can explain to you why a sand-powered tank is implausible if you don’t already know, and if cryonics is in the same league I’d appreciate hearing the explanation.
  • Another part of the article points out the well-known difficulties with whole-body freezing — because the focus is on achieving the best possible preservation of the brain, other parts suffer more. But the reason why the brain is the focus is that you can afford to be a lot bolder in repairing other parts of the body — unlike the brain, if my liver doesn’t survive the freezing, it can be replaced altogether.
  • Further, the article ignores one of the most promising possibilities for reanimation, that of scanning and whole-brain emulation, a route that requires some big advances in computer and scanning technology as well as our understanding of the lowest levels of the brain’s function, but which completely sidesteps any problems with repairing either damage from the freezing process or whatever it was that led to legal death.
  • Sixteen years later, it seems that hasn’t changed; in fact, as far as the issue of technical feasibility goes it is starting to look as if on all the Earth, or at least all the Internet, there is not one person who has ever taken the time to read and understand cryonics claims in any detail, still considers it pseudoscience, and has written a paper, article or even a blog post to rebut anything that cryonics advocates actually say. In fact, the best of the comments on my first blog post on the subject are already of a higher standard than anything my searches have turned up.
  • I don’t have anything useful to add, I just wanted to say that I feel exactly as you do about cryonics and living forever. And I thought that this statement: “I know that I don’t know enough to judge” shows extreme wisdom. If only people wishing to comment on global warming would apply the same test.
  • WRT global warming, the mistake people make is trying to go direct to the first-order evidence, which is much too complicated and too easy to misrepresent to hope to directly interpret unless you make it your life’s work, and even then only in a particular area. The correct thing to do is to collect second-order evidence, such as that every major scientific academy has backed the IPCC.
    • Weiye Loh
       
      First-order evidence vs second-order evidence...
  •  
    Cryonics