
Home/ New Media Ethics 2009 course/ Group items tagged AWARE


Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
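The compounding arithmetic in this annotation is easy to verify. A short Python sketch (the function name is mine, purely illustrative) compounds a doubling every eighteen months:

```python
# Compound the Moore's Law rate quoted above: a factor of 2 every
# 18 months gives roughly a factor of 100 per decade and about a
# billion (nine powers of ten) over 45 years.

def growth_factor(years, doubling_months=18):
    """Total growth after `years`, doubling every `doubling_months`."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

print(round(growth_factor(10)))    # ~102, about a hundredfold per decade
print(f"{growth_factor(45):.2e}")  # ~1.07e+09, nine powers of ten in 45 years
```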
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
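Shannon's 1949 estimate can be restated in modern units with a one-line conversion (plain arithmetic, not a figure from the review itself):

```python
# Convert Shannon's estimate of the Library of Congress -- one
# hundred trillion bits -- into terabytes (8 bits per byte,
# 10^12 bytes per terabyte).
bits = 100 * 10**12
terabytes = bits / 8 / 10**12
print(terabytes)  # 12.5, well within a single modern disk drive
```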
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
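Shannon's insight that redundancy defeats noise can be illustrated with a toy repetition code. This sketch is only a minimal stand-in for the far more efficient codes the theorem actually guarantees:

```python
# A toy illustration of the redundancy idea described above: a 3x
# repetition code with majority voting recovers a message even when
# the channel flips some bits. (A minimal sketch -- Shannon's actual
# coding theorem covers far more efficient codes.)

def encode(bits, repeat=3):
    """Repeat every bit `repeat` times before transmission."""
    return [b for b in bits for _ in range(repeat)]

def decode(received, repeat=3):
    """Majority vote over each group of `repeat` repeated bits."""
    return [int(sum(received[i:i + repeat]) > repeat // 2)
            for i in range(0, len(received), repeat)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = encode(message)
noisy[1] ^= 1    # the channel flips two bits...
noisy[10] ^= 1   # ...in different repetition groups
print(decode(noisy) == message)  # True: errors corrected by redundancy
```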
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • The cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai...

  • Rather than being forced to wait for parliament to meet to air their dissent, opposition parties are now able to post their criticisms online pre-emptively, shifting the time and space of Singapore's political debate
  • Singapore's People's Action Party (PAP)-dominated politics are increasingly being contested online and over social media like blogs, Facebook and Twitter. Pushed by the perceived pro-PAP bias of the mainstream media, Singapore's opposition parties are using various new media to communicate with voters and express dissenting views. Alternative news websites, including The Online Citizen and Temasek Review, have won strong followings by presenting more opposition views in their news mix.
  • Despite its democratic veneer, Singapore rates poorly in global press freedom rankings due to a deeply entrenched culture of self-censorship and a pro-state bias in the mainstream media. Reporters Without Borders, a France-based press freedom advocacy group, recently ranked Singapore 136th in its global press freedom rankings, scoring below repressive countries like Iraq and Zimbabwe. The country's main media publishing house, Singapore Press Holdings, is owned by the state and its board of directors is made up largely of PAP members or other government-linked executives. Senior newspaper editors, including at the Straits Times, must be vetted and approved by the PAP-led government.
  • The local papers have a long record of publicly endorsing the PAP-led government's position, according to Tan Tarn How, a research fellow at the Institute of Policy Studies (IPS) and himself a former journalist. In his research paper "Singapore's print media policy - a national success?" published last year he quoted Leslie Fong, a former editor of the Straits Times, saying that the press "should resist the temptation to arrogate itself the role of a watchdog, or permanent critic, of the government of the day".
  • With regularly briefed and supportive editors, there is no need for pre-publication censorship, according to Tan. When the editors are perceived to get things "wrong", the government frequently takes to task, either publicly or privately, the newspaper's editors or individual journalists, he said.
  • The country's main newspaper, the Straits Times, has consistently stood by its editorial decision-making. Editor Han Fook Kwang said last year: "Our circulation is 380,000 and we have a readership of 1.4 million - these are people who buy the paper every day. We're aware people say we're a government mouthpiece or that we are biased but the test is if our readers believe in the paper and continue to buy it."
Weiye Loh

Leong Sze Hian stands corrected? | The Online Citizen

  • In your article, you make the argument that the Straits Times Forum Editor “was merely amending his (my) letter to cite the correct statistics.” For example, the Education Minister said “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” - but Mr Samuel Wee wrote “His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination”. Kind sir, the statistics state that 1 in 2 are in the top 66.6% (which, incidentally, includes the top fifth of the bottom 50%!). Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university–which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf I look forward to your reply, Mr Leong. Thank you for taking the time to read this message.
  • we should, wherever possible, try to agree to disagree, as it is healthy to have and to encourage different viewpoints.
    • Weiye Loh
       
      Does that mean that every viewpoint can and should be accepted as correct to encourage differences? 
  • If I say I think it is fair in Singapore, because half of the bottom one-third of the people make it to the top two-thirds, it does not mean that someone can quote me and say that I said what I said because half the bottom one-third of people did not make it. I think it is alright to say that I do not agree entirely with what was said, because does it also mean on the flip side that half of the bottom one-third of the people did not make it? This is what I mean by quoting one out of context, by using statistics that I did not say, and implying that I did, or by innuendo.
  • Moreover, depending on the methodology, definition, sampling, etc, half of the bottom one-third of the people making it does not necessarily mean that half did not make it, because some may not be in the population for various reasons, like emigration, not turning up, transfer, whether adjustments are made for the mobility of people up or down the social strata over time, etc. If I did not use a particular statistic to state my case, for example, I don’t think it is appropriate to quote me and say that you agree with me by citing statistics from a third party source, like the MOE chart in the Straits Times article, instead of quoting the statistics that I said.
  • I cannot find anything in any of the media reports to say with certainty that the Minister backed up his remarks with direct reference to the MOE chart. There is also nothing in the narrative that only 10 per cent  of children from one-to-three room flats make it to university – which is to say, 90 per cent  of them don’t. The ’90 per cent’ cannot be attributed to what the minister said, as at best it is the writer’s interpretation of the MOE chart.
  • Interesting exchange of letters. Samuel’s interpretation of the statistics provided by Ng Eng Hen and ST is correct. There is little doubt about it. While I can see where Leong Sze Hian is coming from, I don’t totally agree with him. Specifically, Samuel’s first statement (only ~10% of students living in 1-3 room flat make it to university) is directed at ST’s report that education is a good social leveller but not at Ng. It is therefore a valid point to make.
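The complement arithmetic disputed throughout this exchange can be checked directly; the variable names below are mine, not from either letter:

```python
# If half of a group scores in the top two-thirds, the other half
# by definition scores in the bottom one-third -- against a baseline
# of one-third if scores were independent of socio-economic background.
from fractions import Fraction

share_top_two_thirds = Fraction(1, 2)          # "one in two scores in the top two-thirds"
share_bottom_third = 1 - share_top_two_thirds  # the complement, Mr Wee's framing
baseline = Fraction(1, 3)                      # expected share if background didn't matter

print(share_bottom_third)             # 1/2
print(share_bottom_third / baseline)  # 3/2, i.e. overrepresented by half
```

Both framings describe the same data; the dispute is over which band of the distribution to emphasize.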
Weiye Loh

ST Forum Editor was right after all | The Online Citizen

  • I refer to the article “Straits Times! Why you edit until like that?” (theonlinecitizen, Mar 24). In my view, the Straits Times Forum Editor was not wrong to edit the letter.
  • From a statistical perspective, the forum letter writer, Mr Samuel Wee, was quoting the wrong statistics.
  • For example, the Education Minister said “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” - But, Mr Samuel Wee wrote “His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination”. Another example is Mr Wee’s: “it is indeed heartwarming to learn that only 90% of children from one-to-three-room flats do not make it to university”, when the Straits Times article “New chapter in the Singapore Story” (http://pdfcast.org/pdf/new-chapter-in-singapore-story) of 8 March, on the Minister’s speech in Parliament, clearly showed in the graph “Progression to Unis and Polys” (Source: MOE (Ministry of Education)), that the “percentage of P1 pupils who lived in 1- to 3-room HDB flats and subsequently progressed to tertiary education”, was about 50 per cent, and not the ’90 per cent who do not make it’ cited by Mr Samuel Wee.
  • The whole point of Samuel Wee’s letter is to present Dr Ng’s statistics from a different angle, so as to show that things are not as rosy as Dr Ng made them seem. As posters above have pointed out, if 50% of poor students score in the top 2/3s, that means the other 50% score in the bottom 1/3. In other words, poor students still score disproportionately lower grades. As for the statistic that 90% of poor students do not make it to university, this was shown in a graph provided in the ST. You can see it here: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf
  • Finally, Dr Ng did say: “[Social mobility] cannot be about neglecting those with abilities, just because they come from middle-income homes or are rich. It cannot mean holding back those who are able so that others can catch up.” Samuel Wee paraphrased this as: “…good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students.” I think it was an accurate paraphrase, because that was essentially what Dr Ng was saying. Samuel Wee’s paraphrase merely makes the callousness of Dr Ng’s remark stand out more clearly.
  • As to Mr Wee’s: “Therefore, it was greatly reassuring to read about Dr Ng’s great faith in our “unique, meritocratic Singapore system”, which ensures that good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students”, there was nothing in the Minister’s speech, Straits Times and all other media reports, that quoted the Minister, in this context. In my opinion, the closest that I could find in all the reports, to link in context to the Minister’s faith in our meritocratic system, was what the Straits Times Forum Editor edited – “Therefore, it was reassuring to read about Dr Ng’s own experience of the ‘unique, meritocratic Singapore system’: he grew up in a three-room flat with five other siblings, and his medical studies at the National University of Singapore were heavily subsidised; later, he trained as a cancer surgeon in the United States using a government scholarship”.
  • To the credit of the Straits Times Forum Editor, in spite of the hundreds of letters that he receives in a day, he took the time and effort to: check the accuracy of the letter writer’s ‘quoted’ statistics; find the correct ‘quoted’ statistics to replace the writer’s wrongly ‘quoted’ statistics; and check for misquotes out of context (in this case, what the Education Minister actually said), and then find the correct quote to amend the writer’s statement.
  • Kind sir, the statistics state that 1 in 2 are in the top 66.6% (Which, incidentally, includes the top fifth of the bottom 50%!) Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university–which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf
  • The writer made it a point to say that only 90% did not make it to university. It has been edited to say 50% made it to university AND POLYTECHNIC. Both are right, but the edited one is framed to make the government look good.
Weiye Loh

Roger Pielke Jr.'s Blog: It Is Always the Media's Fault

  • Last summer NCAR issued a dramatic press release announcing that oil from the Gulf spill would soon be appearing on the beaches of the Atlantic Ocean. I discussed it here. Here are the first four paragraphs of that press release: BOULDER—A detailed computer modeling study released today indicates that oil from the massive spill in the Gulf of Mexico might soon extend along thousands of miles of the Atlantic coast and open ocean as early as this summer. The modeling results are captured in a series of dramatic animations produced by the National Center for Atmospheric Research (NCAR) and collaborators. The research was supported in part by the National Science Foundation, NCAR’s sponsor. The results were reviewed by scientists at NCAR and elsewhere, although not yet submitted for peer-review publication. “I’ve had a lot of people ask me, ‘Will the oil reach Florida?’” says NCAR scientist Synte Peacock, who worked on the study. “Actually, our best knowledge says the scope of this environmental disaster is likely to reach far beyond Florida, with impacts that have yet to be understood.” The computer simulations indicate that, once the oil in the uppermost ocean has become entrained in the Gulf of Mexico’s fast-moving Loop Current, it is likely to reach Florida's Atlantic coast within weeks. It can then move north as far as about Cape Hatteras, North Carolina, with the Gulf Stream, before turning east. Whether the oil will be a thin film on the surface or mostly subsurface due to mixing in the uppermost region of the ocean is not known.
  • A few weeks ago NCAR's David Hosansky, who presumably wrote that press release, asked whether NCAR got it wrong.  His answer?  No, not really: During last year’s crisis involving the massive release of oil into the Gulf of Mexico, NCAR issued a much-watched animation projecting that the oil could reach the Atlantic Ocean. But detectable amounts of oil never made it to the Atlantic, at least not in an easily visible form on the ocean surface. Not surprisingly, we’ve heard from a few people asking whether NCAR got it wrong. These events serve as a healthy reminder of a couple of things: the difference between a projection and an actual forecast, and the challenges of making short-term projections of natural processes that can act chaotically, such as ocean currents.
  • What then went wrong? First, the projection. Scientists from NCAR, the Department of Energy’s Los Alamos National Laboratory, and IFM-GEOMAR in Germany did not make a forecast of where the oil would go. Instead, they issued a projection. While there’s not always a clear distinction between the two, forecasts generally look only days or hours into the future and are built mostly on known elements (such as the current amount of humidity in the atmosphere). Projections tend to look further into the future and deal with a higher number of uncertainties (such as the rate at which oil degrades in open waters and the often chaotic movements of ocean currents). Aware of the uncertainties, the scientific team projected the likely path of the spill with a computer model of a liquid dye. They used dye rather than actual oil, which undergoes bacterial breakdown, because a reliable method to simulate that breakdown was not available. As it turned out, the oil in the Gulf broke down quickly due to exceptionally strong bacterial action and, to some extent, the use of chemical dispersants.
  • Second, the challenges of short-term behavior. The Gulf's Loop Current acts as a conveyor belt, moving from the Yucatan through the Florida Straits into the Atlantic. Usually, the current curves northward near the Louisiana and Mississippi coasts—a configuration that would have put it on track to pick up the oil and transport it into open ocean. However, the current’s short-term movements over a few weeks or even months are chaotic and impossible to predict. Sometimes small eddies, or mini-currents, peel off, shifting the position and strength of the main current. To determine the threat to the Atlantic, the research team studied averages of the Loop Current’s past behavior in order to simulate its likely course after the spill and ran several dozen computer simulations under various scenarios. Fortunately for the East Coast, the Loop Current did not behave in its usual fashion but instead remained farther south than usual, which kept it far from the Louisiana and Mississippi coast during the crucial few months before the oil degraded and/or was dispersed with chemical treatments.
  • The Loop Current typically goes into a southern configuration about every 6 to 19 months, although it rarely remains there for very long. NCAR scientist Synte Peacock, who worked on the projection, explains that part of the reason the current is unpredictable is “no two cycles of the Loop Current are ever exactly the same." She adds that the cycles are influenced by such variables as how large the eddy is, where the current detaches and moves south, and how long it takes for the current to reform. Computer models can simulate the currents realistically, she adds. But they cannot predict when the currents will change over to a new cycle. The scientists were careful to explain that their simulations were a suite of possible trajectories demonstrating what was likely to happen, but not a definitive forecast of what would happen. They reiterated that point in a peer-reviewed study on the simulations that appeared last August in Environmental Research Letters. 
  • So who was at fault?  According to Hosansky, it was those dummies in the media: These caveats, however, got lost in much of the resulting media coverage. Another perspective is that having some of these caveats in the press release itself might have been a good idea.
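The ensemble approach described in the annotations above — running several dozen simulations under varied scenarios and reporting a suite of possible trajectories rather than one forecast — can be illustrated with a toy model. This is a hedged sketch only, not NCAR's actual ocean model: the one-dimensional "dye" trajectory, drift rate, and eddy strength are invented for illustration.

```python
import random

def simulate_trajectory(steps=90, drift=1.0, eddy_strength=3.0, seed=None):
    """Toy 1-D 'dye' trajectory: steady drift along a current plus chaotic eddies."""
    rng = random.Random(seed)
    position = 0.0
    path = [position]
    for _ in range(steps):
        # each day: deterministic drift plus an unpredictable eddy kick
        position += drift + rng.gauss(0, eddy_strength)
        path.append(position)
    return path

# A projection is a suite of possible trajectories, not a single forecast:
ensemble = [simulate_trajectory(seed=s) for s in range(50)]
finals = sorted(path[-1] for path in ensemble)
low, high = finals[2], finals[-3]  # rough 5th-95th percentile band of endpoints
print(f"90-day position likely between {low:.0f} and {high:.0f} units downstream")
```

The point mirrored here is the one Hosansky makes: the spread of endpoints across the ensemble, not any single path, is the result — and a single dramatic animation of one member can easily be misread as a forecast.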
Sonny Cher

Who Says Smoking Pot is Illegal? - 1 views

I have always been addicted to marijuana. It started out with my friends at high school; since then I cannot turn myself away from experiencing high times puffing marijuana. It feels so nice. Howev...

marajuana

started by Sonny Cher on 01 Jun 11 no follow-up yet
Sonny Cher

Who Says Smoking Pot is Illegal? - 1 views

I have always been addicted to marijuana. It started out with my friends at high school; since then I cannot turn myself away from experiencing high times puffing marijuana. It feels so nice. Howev...

high times

started by Sonny Cher on 16 Jun 11 no follow-up yet
Weiye Loh

Are Facebook's Customers Leaving For Real? - 0 views

  • A more important metric for Facebook is the same one that advertisers need to be looking at before they consider the social network for marketing efforts. How many of those accounts are active and real? It’s less of an issue for Facebook than it is for Twitter but all of the talk of total number of accounts in a social network is starting to sound like TV’s old mantra of how many households they reach. It’s an empty number that anyone who is doing even a little thinking will see as hype and not truly important.
  • there were suggestions of doom for Facebook and the concern that growth had stopped unless they get into China
  • The possibility of burn out on the service is considered as well
  • it’s still helpful but if I don’t get to it for a few days I have never felt like I missed anything.
Weiye Loh

journalism.sg » News Corp inquiry raises questions about media accountability - 0 views

  • One is reminded of the pathetic performance of BP CEO Tony Hayward at the US Senate hearings on the Gulf of Mexico oil rig disaster. BP employed 90,000 staff globally. Tony Hayward denied awareness of the cost-cutting obsessions which compromised safety standards across the company. He did not know who approved such dangerous compromises and how such risky operations at sea were supervised. Perhaps he too was betrayed by the people he trusted?
  • Two high profile CEOs of high profile global corporations, both clueless when internal malpractice explodes into world news? Such lame excuses are unacceptable from corporate chiefs and political leaders. They are given too much power over people, resources and policy to be allowed to slither away.
  • Mr Murdoch espoused no particular political ideology. Some might call it a lack of principles. If anything was consistent in his media philosophy, it was profitable opportunism. If that meant pandering to the public appetite for ritual sacrifice of the rich and famous, so be it. If that meant offering a megaphone to Christian fanatics of America’s Bible belt and the kooky Tea Party wing of the Republicans, he would oblige. If bare breasts and titillation will outsell rival tabloids, fine.
Weiye Loh

Age 8 & Wanting A Sex Change - Sky TV - 0 views

  • Despite a gradual change for the better, pre-puberty transgender cases are still a noticeably tabloid-exploitative, morally and ethically ambiguous matter.
  • The only problem is that many young children grow out of the identity confusion when they hit puberty. Oh, and the initial hormone blocker treatment is irreversible.
  • But then that's essentially the crux of the argument: does immaturity necessarily equal a lack of self-awareness? And when exactly is a right time for the all-important gender reassignment?
Weiye Loh

He had 500 offensive photos in his phone - 0 views

  • A man was caught with more than 500 offensive photos in his mobile phone. This happened after a woman complained that he had taken a picture of her chest at a shopping centre.
  • "My husband and I were shocked when we were shown the data because there were more than 500 pictures of various women that this man took. All the pictures were of their chests and breasts. From the angle of the shots, I could see that the women in these pictures were not aware that they were victims."
  • According to the law, anyone who takes offensive photos of a woman in a public place without her prior consent can be charged with outrage of modesty. If found guilty, the person involved faces a fine, up to a year in jail, or both.
  • PLEASE! if the pictures taken are "offensive", then the "victims" in the pictures should be charged for INDECENT EXPOSURE! Where is the logic that a picture of a sexy girl is offensive but the same sexy girl walking in public is not offensive?IS IT UPSKIRT? NO! if i take a 18megapix wide lens camera and take the picture of a crowd, then crop out a sexy girl in the picture taken.. is that offensive? whats the diff? it is a publicly taken picture without anyone's consent!!!! IF PEOPLE DRESS SEXILY, THEY MUST BE EXPECTED TO BE OGGLED AND STARED AT!!
    • Weiye Loh
       
      This is a comment by a reader on the news website. I think the issue of privacy here is interesting because, technically speaking, the 'victims' are in public. But one can also argue that even though they are in public, they gave no consent to have their photos taken, although consent to being viewed by the public is somewhat implied since they willingly stepped out of their private space. Given that the photos are shots aimed at the chests and breasts of women (note that they are not up-skirt or down-blouse shots, i.e. no clear legal infringement of peeping), is it wrong for the man to take the photos? The issue of objectification also comes in here, since the 'victims' are being objectified based on a particular bodily part or feature. Is this objectification the 'reason' for victimization? If the women were photographed as a whole, would it still be considered wrong? Personally, I feel that this falls into a grey area rather than the usual black-and-white situations (although one can argue that even black and white are shades of grey). I have no answers, but it's still food for thought.
Weiye Loh

flaneurose: The KK Chemo Misdosage Incident - 0 views

  • Labelling the pump that dispenses in ml/hr in a different color from the pump that dispenses in ml/day would be an obvious remedy that would have addressed the KK incident. It's the common-sensical solution that anyone can think of.
  • Sometimes, design flaws like that really do occur because engineers can't see the wood for the trees.
  • But sometimes the team is aware of these issues and highlights them to management, but the manufacturer still proceeds as before. Why is that? Because in addition to design principles, one must be mindful that there are always business considerations at play as well. Manufacturing two (or more) separate designs for pumps incurs greater costs, eliminates the ability to standardize across pumps, increases holding inventory, and overall increases complexity of business and manufacturing processes, and decreases economies of scale. All this naturally reduces profitability. It's not just pumps. Even medicines are typically sold in identical-looking vials with identically colored vial caps, with only the text on the vial labels differentiating them in both drug type and concentration. You can imagine what kinds of accidents can potentially happen there.
  • Legally, the manufacturer has clearly labelled on the pump (in text) the appropriate dosing regime, or for a medicine vial, the type of drug and concentration. The manufacturer has hence fulfilled its duty. Therefore, if there are any mistakes in dosing, the liability for the error lies with the hospital and not the manufacturer of the product. The victim of such a dosing error can be said to be an "externalized cost"; the beneficiaries of the victim's suffering are the manufacturer, who enjoys greater profitability, the hospital, which enjoys greater cost-savings, and the public, who save on healthcare. Is it ethical of the manufacturer, to "pass on" liability to the hospital? To make it difficult (or at least not easy) for the hospital to administer the right dosage? Maybe the manufacturer is at fault, but IMHO, it's very hard to say.
  • When a chemo incident like the one that happened in KK occurs, there are cries of public remonstration, and the pendulum may swing the other way. Hospitals might make the decision to purchase more expensive and better designed pumps (that is, if they are available). Then years down the road, when a bureaucrat (or a management consultant) with an eye to trim costs looks through the hospital purchasing orders, they may make the suggestion that $XXX could be saved by buying the generic version of such-and-such a product, instead of the more expensive version. And they would not be wrong, just... myopic. Then the cycle starts again. Sometimes it's not only about human factors. It could be about policy, or human nature, or business fundamentals, or just the plain old, dysfunctional way the world works.
    • Weiye Loh
       
      Interesting article. It explains clearly why our 'ethical' considerations are always limited to a particular context and specific concerns.
Weiye Loh

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left.
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble with translating commands for making a fist, let alone probing anyone's mental secrets
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.
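The decoding step described in this item — "thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left" — is, in crude terms, a classification problem: map a recorded firing-rate pattern onto an intended direction. The sketch below is a minimal illustration using simulated firing rates and a nearest-centroid classifier; it is not BrainGate's actual decoder, and the channel counts and rates are invented assumptions.

```python
import random

def simulated_rates(intent, n_channels=16, rng=None):
    """Fake motor-cortex recording: half the channels fire harder for 'right'."""
    rng = rng or random.Random()
    rates = [rng.gauss(10, 2) for _ in range(n_channels)]  # baseline firing rates
    if intent == "right":
        for i in range(n_channels // 2):
            rates[i] += 5.0  # direction-tuned channels increase their rate
    return rates

def centroid(vectors):
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

def train_decoder(trials):
    """Learn one average firing pattern per intended direction."""
    return {intent: centroid([r for label, r in trials if label == intent])
            for intent in ("left", "right")}

def decode(model, rates):
    """Classify a new pattern by its nearest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda intent: dist(model[intent], rates))

rng = random.Random(42)
trials = [(intent, simulated_rates(intent, rng=rng))
          for _ in range(50) for intent in ("left", "right")]
model = train_decoder(trials)
correct = sum(decode(model, simulated_rates(intent, rng=rng)) == intent
              for intent in ("left", "right") for _ in range(20))
print(f"decoded {correct}/40 test trials correctly")
```

Because the two imagined movements produce reliably different average patterns, even this crude decoder separates them well — which is the core idea the article describes, before the real system's harder problems of implant longevity, signal drift, and continuous cursor control.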
Weiye Loh

Rationally Speaking: Evolution as pseudoscience? - 0 views

  • I have been intrigued by an essay by my colleague Michael Ruse, entitled “Evolution and the idea of social progress,” published in a collection that I am reviewing, Biology and Ideology from Descartes to Dawkins (gotta love the title!), edited by Denis Alexander and Ronald Numbers.
  • Ruse's essay in the Alexander-Numbers collection questions the received story about the early evolution of evolutionary theory, which sees the stuff that immediately preceded Darwin — from Lamarck to Erasmus Darwin — as protoscience, the immature version of the full fledged science that biology became after Chuck's publication of the Origin of Species. Instead, Ruse thinks that pre-Darwinian evolutionists really engaged in pseudoscience, and that it took a very conscious and precise effort on Darwin’s part to sweep away all the garbage and establish a discipline with empirical and theoretical content analogous to that of the chemistry and physics of the time.
  • Ruse asserts that many serious intellectuals of the late 18th and early 19th century actually thought of evolution as pseudoscience, and he is careful to point out that the term “pseudoscience” had been used at least since 1843 (by the physiologist Francois Magendie)
  • Ruse’s somewhat surprising yet intriguing claim is that “before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason.”
  • Indeed, the link between evolution and the idea of human social-cultural progress was very strong before Darwin, and was one of the main things Darwin got rid of.
  • The encyclopedist Denis Diderot was typical in this respect: “The Tahitian is at a primary stage in the development of the world, the European is at its old age. The interval separating us is greater than that between the new-born child and the decrepit old man.” Similar nonsensical views can be found in Lamarck, Erasmus, and Chambers, the anonymous author of The Vestiges of the Natural History of Creation, usually considered the last protoscientific book on evolution to precede the Origin.
  • On the other side of the divide were social conservatives like the great anatomist George Cuvier, who rejected the idea of evolution — according to Ruse — not as much on scientific grounds as on political and ideological ones. Indeed, books like Erasmus’ Zoonomia and Chambers’ Vestiges were simply not much better than pseudoscientific treatises on, say, alchemy before the advent of modern chemistry.
  • people were well aware of this sorry situation, so much so that astronomer John Herschel referred to the question of the history of life as “the mystery of mysteries,” a phrase consciously adopted by Darwin in the Origin. Darwin set out to solve that mystery under the influence of three great thinkers: Newton, the above mentioned Herschel, and the philosopher William Whewell (whom Darwin knew and assiduously frequented in his youth)
  • Darwin was a graduate of the University of Cambridge, which had also been Newton’s home. Chuck got drilled early on during his Cambridge education with the idea that good science is about finding mechanisms (vera causa), something like the idea of gravitational attraction underpinning Newtonian mechanics. He reflected that all the talk of evolution up to then — including his grandfather’s — was empty, without a mechanism that could turn the idea into a scientific research program.
  • The second important influence was Herschel’s Preliminary Discourse on the Study of Natural Philosophy, published in 1831 and read by Darwin shortly thereafter, in which Herschel sets out to give his own take on what today we would call the demarcation problem, i.e. what methodology is distinctive of good science. One of Herschel’s points was to stress the usefulness of analogical reasoning
  • Finally, and perhaps most crucially, Darwin also read (twice!) Whewell’s History of the Inductive Sciences, which appeared in 1837. In it, Whewell sets out his notion that good scientific inductive reasoning proceeds by a consilience of ideas, a situation in which multiple independent lines of evidence point to the same conclusion.
  • the first part of the Origin, where Darwin introduces the concept of natural selection by way of analogy with artificial selection can be read as the result of Herschel’s influence (natural selection is the vera causa of evolution)
  • the second part of the book, constituting Darwin's famous “long argument,” applies Whewell’s method of consilience by bringing in evidence from a number of disparate fields, from embryology to paleontology to biogeography.
  • What, then, happened to the strict coupling of the ideas of social and biological progress that had preceded Darwin? While he still believed in the former, the latter was no longer an integral part of evolution, because natural selection makes things “better” only in a relative fashion. There is no meaningful sense in which, say, a large brain is better than very fast legs or sharp claws, as long as you still manage to have dinner and avoid being dinner by the end of the day (or, more precisely, by the time you reproduce).
  • Ruse’s claim that evolution transitioned not from protoscience to science, but from pseudoscience, makes sense to me given the historical and philosophical developments. It wasn’t the first time either. Just think about the already mentioned shift from alchemy to chemistry
  • Of course, the distinction between pseudoscience and protoscience is itself fuzzy, but we do have what I think are clear examples of the latter that cannot reasonably be confused with the former, SETI for one, and arguably Ptolemaic astronomy. We also have pretty obvious instances of pseudoscience (the usual suspects: astrology, ufology, etc.), so the distinction — as long as it is not stretched beyond usefulness — is interesting and defensible.
  • It is amusing to speculate which, if any, of the modern pseudosciences (cryonics, singularitarianism) might turn out to be able to transition in one form or another to actual sciences. To do so, they may need to find their philosophically and scientifically savvy Darwin, and a likely bet — if history teaches us anything — is that, should they succeed in this transition, their mature form will look as different from the original as chemistry and alchemy. Or as Darwinism and pre-Darwinian evolutionism.
  • Darwin called the Origin "one long argument," but I really do think that recognizing that the book contains (at least) two arguments could help to dispel that whole "just a theory" canard. The first half of the book is devoted to demonstrating that natural selection is the true cause of evolution; vera causa arguments require proof that the cause's effect be demonstrated as fact, so the second half of the book is devoted to a demonstration that evolution has really happened. In other words, evolution is a demonstrable fact and natural selection is the theory that explains that fact, just as the motion of the planets is a fact and gravity is a theory that explains it.
  • Cryogenics is the study of the production of low temperatures and the behavior of materials at those temperatures. It is a legitimate branch of physics and has been for a long time. I think you meant 'cryonics'.
  • The Singularity means different things to different people. It is uncharitable to dismiss all "singularitarians" by debunking Kurzweil. He is low hanging fruit. Reach for something higher.
  •  
    "before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason."
Weiye Loh

Rationally Speaking: Ray Kurzweil and the Singularity: visionary genius or pseudoscient... - 0 views

  • I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
  • Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
  • The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).”
  • the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
  • here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.”
  • Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
  • “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
  • Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another type of beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one?
  • As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
  • To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
  • The idea is that even under very pessimistic (i.e., very un-Kurzweil-like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind-boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet.
  • Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time?
Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones - 0 views

  • "A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point." So wrote the celebrated Stanford University psychologist Leon Festinger (PDF).
  • How would people so emotionally invested in a belief system react, now that it had been soundly refuted? At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they'd all been spared at the last minute. Festinger summarized the extraterrestrials' new pronouncement: "The little group, sitting all night long, had spread so much light that God had saved the world from destruction." Their willingness to believe in the prophecy had saved Earth from the prophecy!
  • This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
  • The theory of motivated reasoning builds on a key insight of modern neuroscience (PDF): Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.
  • We're not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about.
  • Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. "They retrieve thoughts that are consistent with their previous beliefs," says Taber, "and that will lead them to build an argument and challenge what they're hearing."
  • when we think we're reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we're being scientists, but we're actually being lawyers (PDF). Our "reasoning" is a means to a predetermined end—winning our "case"—and is shot through with biases. They include "confirmation bias," in which we give greater heed to evidence and arguments that bolster our beliefs, and "disconfirmation bias," in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.
Weiye Loh

Evolutionary analysis shows languages obey few ordering rules - 0 views

  • The authors of the new paper point out just how hard it is to study languages. We're aware of over 7,000 of them, and they vary significantly in complexity. There are a number of large language families that are likely derived from a single root, but a large number of languages don't slot easily into one of the major groups. Against that backdrop, even a set of simple structural decisions—does the noun or verb come first? where does the preposition go?—becomes dizzyingly complex, with different patterns apparent even within a single language tree.
  • Linguists, however, have been attempting to find order within the chaos. Noam Chomsky helped establish the Generative school of thought, which suggests that there must be some constraints to this madness, some rules that help make a language easier for children to pick up, and hence more likely to persist. Others have approached this issue via a statistical approach (the authors credit those inspired by Joseph Greenberg for this), looking for word-order rules that consistently correlate across language families. This approach has identified a handful of what may be language universals, but our uncertainty about language relationships can make it challenging to know when some of these correlations are simply derived from a common inheritance.
  • For anyone with a biology background, having traits shared through common inheritance should ring a bell. Evolutionary biologists have long been able to build family trees of related species, called phylogenetic trees. By figuring out what species have the most traits in common and grouping them together, it's possible to identify when certain features have evolved in the past. In recent years, the increase in computing power and DNA sequences to align has led to some very sophisticated phylogenetic software, which can analyze every possible tree and perform a Bayesian statistical analysis to figure out which trees are most likely to represent reality. By treating language features like subject-verb order as a trait, the authors were able to perform this sort of analysis on four different language families: 79 Indo-European languages, 130 Austronesian languages, 66 Bantu languages, and 26 Uto-Aztecan languages. Although we don't have a complete roster of the languages in those families, they include over 2,400 languages that have been evolving for a minimum of 4,000 years.
  • The results are bad news for universalists: "most observed functional dependencies between traits are lineage-specific rather than universal tendencies," according to the authors. The authors were able to identify 19 strong correlations between word order traits, but none of these appeared in all four families; only one of them appeared in more than two. Fifteen of them only occur in a single family. Specific predictions based on the Greenberg approach to linguistics also failed to hold up under the phylogenetic analysis. "Systematic linkages of traits are likely to be the rare exception rather than the rule," the authors conclude.
  • If universal features can't account for what we observe, what can? Common descent. "Cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states."
  • it still leaves a lot of areas open for linguists to argue about. And the study did not build an exhaustive tree of any of the language families, in part because we probably don't have enough information to classify all of them at this point.
  • Still, it's hard to imagine any further details could overturn the gist of things, given how badly features failed to correlate across language families. And the work might be well received in some communities, since it provides an invitation to ask a fascinating question: given that there aren't obvious word order patterns across languages, how does the human brain do so well at learning the rules that are a peculiarity to any one of them?
  •  
    young children can easily learn to master more than one language in an astonishingly short period of time. This has led a number of linguists, most notably Noam Chomsky, to suggest that there might be language universals, common features of all languages that the human brain is attuned to, making learning easier; others have looked for statistical correlations between languages. Now, a team of cognitive scientists has teamed up with an evolutionary biologist to perform a phylogenetic analysis of language families, and the results suggest that when it comes to the way languages order key sentence components, there are no rules.
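The family-by-family comparison at the heart of the study can be sketched in miniature: treat each word-order trait as a categorical character, then tally trait pairings within each family separately, since a pairing found in only one lineage may reflect common inheritance rather than a universal. This is only an illustrative simplification; the language data below are invented placeholders, and the real analysis used Bayesian phylogenetic software over thousands of languages rather than simple tallies.

```python
from collections import defaultdict

# Hypothetical toy data: (family, verb-object order, adposition type) per
# language. These rows are placeholders, not the study's actual dataset.
languages = [
    ("Indo-European", "VO", "preposition"),
    ("Indo-European", "OV", "postposition"),
    ("Austronesian",  "VO", "preposition"),
    ("Bantu",         "VO", "preposition"),
    ("Uto-Aztecan",   "OV", "postposition"),
]

def pairings_by_family(data):
    """Tally (verb-object order, adposition) pairings within each family,
    so a claimed universal can be checked family by family instead of
    pooling all languages and mistaking inheritance for correlation."""
    counts = defaultdict(lambda: defaultdict(int))
    for family, vo_order, adposition in data:
        counts[family][(vo_order, adposition)] += 1
    return counts

counts = pairings_by_family(languages)
```

A pairing that recurs in every family is a candidate universal; one confined to a single family is more plausibly lineage-specific, which is exactly the pattern the authors report.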
Weiye Loh

BioMed Central | Full text | Mistaken Identifiers: Gene name errors can be introduced i... - 0 views

  • Background: When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names. Results: A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered. Conclusions: Users of Excel for analyses involving gene names should be aware of this problem, which can cause genes, including medically important ones, to be lost from view and which has contaminated even carefully curated public databases. We provide work-arounds and scripts for circumventing the problem.
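The two failure modes described in the abstract are easy to screen for after the fact. Below is a minimal, hypothetical sketch (not the paper's own work-around scripts) that flags identifiers matching either pattern: date-like strings such as "2-Sep" (what a symbol like SEPT2 becomes) and scientific-notation floats of the kind produced from Riken clone IDs.

```python
import re

# Spreadsheet autoconversion mangles gene identifiers in two ways:
#   1. symbols like SEPT2 become dates such as "2-Sep";
#   2. Riken clone IDs like 2310009E13 are read as floating-point numbers.
# The patterns below flag anything matching either shape for manual review.
DATE_LIKE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$",
    re.IGNORECASE,
)
FLOAT_LIKE = re.compile(r"^\d+(\.\d+)?E[+-]?\d+$", re.IGNORECASE)

def flag_suspect_identifiers(names):
    """Return entries that look like the irreversible output of a date or
    floating-point autoconversion, so a curator can re-check the source."""
    return [n for n in names if DATE_LIKE.match(n) or FLOAT_LIKE.match(n)]

flagged = flag_suspect_identifiers(["TP53", "2-Sep", "2.31E+13", "BRCA1"])
# flagged holds the two mangled-looking entries; TP53 and BRCA1 pass
```

Note that FLOAT_LIKE also matches an unconverted Riken ID itself, which is precisely why spreadsheets misread it; flagging both the original and the converted form is deliberate here.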
Weiye Loh

Climate Researchers Urged To Use 'Plain Language' - Science News - redOrbit - 0 views

  • James White of the University of Colorado at Boulder told fellow researchers to use plain language when describing their research to a general audience. Focusing on the report's technical details could obscure the basic science. To put it bluntly, “if you put more greenhouse gases in the atmosphere, it will get warmer,” he said. US climate scientist Robert Corell said it was pertinent to try to reach out to all members of society to spread awareness of Arctic melt and the impact it has on the whole world. “Stop speaking in code. Rather than 'anthropogenic,' you could say 'human caused,'” Corell said at the conference of nearly 400 scientists.
Weiye Loh

Turning Privacy "Threats" Into Opportunities - Esther Dyson - Project Syndicate - 0 views

  • Most disclosure statements are not designed to be read; they are designed to be clicked on. But some companies actually want their customers to read and understand the statements. They don’t want customers who might sue, and, just in case, they want to be able to prove that the customers did understand the risks. So the leaders in disclosure statements right now tend to be financial and health-care companies – and also space-travel and extreme-sports vendors. They sincerely want to let their customers know what they are getting into, because a regretful customer is a vengeful one. That means making disclosure statements readable. I would suggest turning them into a quiz. The user would not simply click a single button, but would have to select the right button for each question. For example: What are my chances of dying in space? A) 5% B) 30% C) 1-4% (the correct answer, based on experience so far; current spacecraft are believed to be safer.) Now imagine: Who can see my data? A) I can. B) XYZ Corporation. C) XYZ Corporation’s marketing partners. (Click here to see the list.) D) XYZ Corporation’s affiliates and anyone it chooses. As the customer picks answers, she gets a good idea of what is going on. In fact, if you're a marketer, why not dispense with a single right answer and let the consumer specify what she wants to have happen with her data (and corresponding privileges/access rights if necessary)? That’s much more useful than vague policy statements. Suddenly, the disclosure statement becomes a consumer application that adds value to the vendor-consumer relationship.
  • And show the data themselves rather than a description.
  • this is all very easy if you are the site with which the user communicates directly; it is more difficult if you are in the background, a third party collecting information surreptitiously. But that practice should be stopped, anyway.
  • just as they have with Facebook, users will become more familiar with the idea of setting their own privacy preferences and managing their own data. Smart vendors will learn from Facebook; the rest will lose out to competitors. Visualizing the user's information and providing an intelligible interface is an opportunity for competitive advantage.
  • I see this happening already with a number of companies, including some with which I am involved. For example, in its research surveys, 23andMe asks people questions such as how often they have headaches or whether they have ever been exposed to pesticides, and lets them see (in percentages) how other 23andMe users answer the question. This kind of information is fascinating to most people. TripIt lets you compare and match your own travel plans with those of friends. Earndit lets you compete with others to exercise more and win points and prizes.
  • Consumers increasingly expect to be able to see themselves both as individuals and in context. They will feel more comfortable about sharing data if they feel confident that they know what is shared and what is not. The online world will feel like a well-lighted place with shops, newsstands, and the like, where you can see other people and they can see you. Right now, it more often feels like lurking in a spooky alley with a surveillance camera overlooking the scene.
  • Of course, there will be “useful” data that an individual might not want to share – say, how much alcohol they buy, which diseases they have, or certain of their online searches. They will know how to keep such information discreet, just as they might close the curtains to get undressed in their hotel room after enjoying the view from the balcony. Yes, living online takes a little more thought than living offline. But it is not quite as complex once Internet-based services provide the right tools – and once awareness and control of one’s own data become a habit.
  •  
    companies see consumer data as something that they can use to target ads or offers, or perhaps that they can sell to third parties, but not as something that consumers themselves might want. Of course, this is not an entirely new idea, but most pundits on both sides - privacy advocates and marketers - don't realize that rather than protecting consumers or hiding from them, companies should be bringing them into the game. I believe that successful companies will turn personal data into an asset by giving it back to their customers in an enhanced form. I am not sure exactly how this will happen, but current players will either join this revolution or lose out.
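Dyson's quiz idea reduces to a small piece of logic: each question's chosen answer is stored as an explicit preference rather than discarded after a single "I agree" click. The sketch below is a hypothetical illustration of that mechanism; the question, options, and company name follow her invented XYZ Corporation example.

```python
# A minimal sketch of a quiz-style disclosure statement. One question is
# shown; a real form would cover each disclosed practice the same way.
QUIZ = [
    {
        "question": "Who can see my data?",
        "options": {
            "A": "Only I can.",
            "B": "XYZ Corporation.",
            "C": "XYZ Corporation's marketing partners.",
        },
    },
]

def run_disclosure_quiz(quiz, choices):
    """Record one answer per question. Unlike a single consent click,
    each answer doubles as an explicit, auditable data-sharing preference."""
    prefs = {}
    for item, choice in zip(quiz, choices):
        if choice not in item["options"]:
            raise ValueError(f"invalid choice {choice!r}")
        prefs[item["question"]] = item["options"][choice]
    return prefs

prefs = run_disclosure_quiz(QUIZ, ["C"])
```

Rejecting unrecognized answers forces the reader to engage with the options, and the returned mapping is exactly the per-consumer preference record Dyson argues vendors should keep.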