
New Media Ethics 2009 course: Group items tagged Open Information


Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
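Chappe's two-symbol phrase codes work like a shared lookup table: sender and receiver hold identical code books, and a pair of optical symbols selects one of roughly eight thousand pre-agreed phrases or proper names. A minimal sketch in Python, with entirely hypothetical codes and phrases (the real code book entries are not given in the review):

```python
# Hypothetical two-symbol codebook in the spirit of Chappe's system.
# Two optical symbols select a whole phrase from a shared code book,
# instead of spelling it out letter by letter.
codebook = {
    ("07", "91"): "enemy fleet sighted",
    ("12", "34"): "Napoleon",
    ("12", "35"): "Paris",
}
encode = {phrase: code for code, phrase in codebook.items()}

message = ["enemy fleet sighted", "Paris"]
signals = [encode[phrase] for phrase in message]    # two symbols per phrase
received = [codebook[signal] for signal in signals]  # receiver looks them up
print(signals)
```

Because the French text of a phrase might take dozens of slow tower-to-tower transmissions, collapsing it to two symbols is a large speed gain; the same table also acts as a cipher for anyone without the code book, which is exactly the secrecy Napoleon wanted.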
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
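Morse's "short and long pulses" map directly to the familiar dot-dash alphabet. A small sketch using a subset of the standard International Morse table (Morse gave the shortest codes, like E and T, to the most frequent letters, an early intuition for the compression ideas Shannon later formalized):

```python
# Subset of the International Morse alphabet: dots are short pulses, dashes long.
MORSE = {
    "E": ".",   "T": "-",    "A": ".-",   "I": "..",
    "N": "-.",  "S": "...",  "O": "---",  "R": ".-.",
    "M": "--",  "D": "-..",  "H": "....", "W": ".--",
}

def to_morse(text):
    """Encode a message letter by letter, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(to_morse("SOS"))  # ... --- ...
```

Anyone could learn the table, which is why the system offered no secrecy: the code is public, unlike Chappe's private code books.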
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was a chemist by training and a co-founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
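The arithmetic behind those numbers is easy to check: a doubling every eighteen months compounds to roughly a hundredfold per decade, and to about a billionfold over forty-five years.

```python
def growth_factor(years, doubling_months=18):
    """Compound growth for 'a factor of two every eighteen months'."""
    return 2 ** (years * 12 / doubling_months)

print(round(growth_factor(10)))    # per decade: 2**6.67, roughly 100
print(f"{growth_factor(45):.2e}")  # over 45 years: 2**30, about 1.07e9
```

Nine powers of ten over forty-five years, just as the review says.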
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
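Shannon's estimate converts to modern units in one line: one hundred trillion bits is only 12.5 terabytes, which is why the whole of his 1949 "Library of Congress" now fits on a handful of commodity drives.

```python
loc_bits = 100e12                    # Shannon's 1949 estimate: 10**14 bits
loc_terabytes = loc_bits / 8 / 1e12  # 8 bits per byte, 10**12 bytes per TB
print(loc_terabytes)                 # 12.5 TB
```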
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in a 1386 parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
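The redundancy argument can be illustrated with the simplest error-correcting scheme, a repetition code with majority voting. This toy simulation (not Shannon's actual coding constructions, which are far more efficient) shows errors collapsing once the transmission is redundant enough:

```python
import random

def transmit(bits, flip_prob, rng):
    """A noisy channel: each bit is flipped independently with flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def encode(bits, r):
    """Repetition code: send r copies of every bit."""
    return [b for b in bits for _ in range(r)]

def decode(bits, r):
    """Majority vote over each group of r received copies."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(1000)]

# Without redundancy, about 10% of bits arrive corrupted.
plain = transmit(message, 0.1, rng)
plain_errors = sum(a != b for a, b in zip(message, plain))

# With nine-fold repetition, almost every bit is recovered correctly.
decoded = decode(transmit(encode(message, 9), 0.1, rng), 9)
coded_errors = sum(a != b for a, b in zip(message, decoded))
print(plain_errors, coded_errors)  # roughly a hundred errors vs. a handful
```

Each decoded bit is wrong only if five or more of its nine copies flip, which at a 10% flip rate happens less than 0.1% of the time. That, in miniature, is Shannon's point about Wikipedia: enough redundant readers and edits, and accurate information survives a very noisy channel.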
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Open science: a future shaped by shared experience | Education | The Observer

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? "It was an experiment," he admits. "I thought it would be interesting to try." He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn't have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls "an obligation and responsibility to the public purse".
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn't be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."
Weiye Loh

Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Refl...

  • The ideal that these nerdy revolutionaries are pursuing is not, as with previous generations, justice, freedom, or democracy; rather it is “openness,” as in Open Data, Open Information, Open Government. Precisely what is meant by “openness” is never (at least certainly not in the context of this conference) really defined in a form that an outsider could grapple with (and perhaps critique).
  • the “open data/open government” movement begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations.
  • further that this access, once available, would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with these bureaucrats on) certain key and significant activities—planning, analyzing, budgeting, that sort of thing. Moreover, through the implementation of processes of crowdsourcing, this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public.
  • It’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. And, it is argued, solutions are available for putting into the hands of these citizens the means/technical tools for sifting and sorting and making critical analyses of government activities, if only the key could be turned and government data were made “accessible” (“open”).
  • A lot of the conference took place in specialized workshops covering the technical details of how to link various sets of this newly available data with other sets, how to structure this data so that it could serve various purposes, and perhaps most importantly how to design the architecture and ontology (ultimately the management policies and procedures) of the data itself within government so that it is “born open” rather than liberated only after the fact; data born open is far more useful in the larger world of open and universally accessible data.
  • it matters very much who the (anticipated) user is, since what is being put in place are the frameworks for the data environment of the future. These will include, for the most part, assumptions about who the ultimate user is or will be, and about whether a new “data divide” will emerge, written more deeply into the fabric of the Information Society than even the earlier “digital (access) divide”.
Weiye Loh

When information is power, these are the questions we should be asking | Online Journal...

  • “There is absolutely no empirical evidence that shows that anyone actually uses the accounts produced by public bodies to make any decision. There is no group of principals analogous to investors. There are many lists of potential users of the accounts. The Treasury, CIPFA (the UK public sector accounting body) and others have said that users might include the public, taxpayers, regulators and oversight bodies. I would be prepared to put up a reward for anyone who could prove to me that any of these people have ever made a decision based on the financial reports of a public body. If there are no users of the information then there is no point in making the reports better. If there are no users, more technically correct reports do nothing to improve the understanding of public finances. In effect all that better reports do is legitimise the role of professional accountants in the accountability process.”
  • raw data – and the ability to interrogate that – should instead be made available because (quoting Anthony Hopwood): “Those with the power to determine what enters into organisational accounts have the means to articulate and diffuse their values and concerns, and subsequently to monitor, observe and regulate the actions of those that are now accounted for.”
  • Data is not just some opaque term; something for geeks: it’s information: the raw material we deal in as journalists. Knowledge. Power. The site of a struggle for control. And considering it’s a site that journalists have always fought over, it’s surprisingly placid as we enter one of the most important ages in the history of information control.
  • 3 questions to ask of any transparency initiative:
      1. If information is to be published in a database behind a form, then it’s hidden in plain sight. It cannot be easily found by a journalist, and only simple questions will be answered.
      2. If information is to be published in PDFs or JPEGs, or some format that you need proprietary software to see, then it cannot easily be questioned by a journalist.
      3. If you will have to pass a test to use the information, then obstacles will be placed between the journalist and that information.
    The next time an organisation claims that they are opening up their information, tick those questions off. (If you want more, see Gurstein’s list of 7 elements that are needed to make effective use of open data.)
  •  
    control of information still represents the exercise of power, and how shifts in that control as a result of the transparency/open data/linked data agenda are open to abuse, gaming, or spin.
Weiye Loh

The importance of culture change in open government | Government In The Lab - 0 views

  • Open government cannot succeed through technology only.  Open data, ideation platforms, cloud solutions, and social media are great tools, but when they are used to deliver government services using existing models they can only deliver partial value, value which cannot be measured and value that is unclear to anyone but the technology practitioners that are delivering the services.
  • It is this thinking that has led a small group of us to launch a new Group on Govloop called Culture Change and Open Government.  Bill Brantley wrote a great overview of the group which notes that “The purpose of this group is to create an international community of practice devoted to discussing how to use cultural change to bring about open government and to use this site to plan and stage unconferences devoted to cultural change”.
  • “Open government is a citizen-centric philosophy and strategy that believes the best results are usually driven by partnerships between citizens and government, at all levels. It is focused entirely on achieving goals through increased efficiency, better management, information transparency, and citizen engagement and most often leverages newer technologies to achieve the desired outcomes. This is bringing business approaches, business technologies, to government”.
  •  
    open government has primarily been the domain of the technologist.  Other parts of the organization have not been considered, have not been educated, have not been organized around a new way of thinking, a new way of delivering value.  The organizational model, the culture itself, has not been addressed, the value of open government is not understood, it is not measurable, and it is not an approach that the majority of those in and around government have bought into.
Weiye Loh

Royal Society launches study on openness in science | Royal Society - 0 views

  • Science as a public enterprise: opening up scientific information will look at how scientific information should best be managed to improve the quality of research and build public trust.
  • “Science has always been about open debate. But incidents such as the UEA email leaks have prompted the Royal Society to look at how open science really is.  With the advent of the Internet, the public now expect a greater degree of transparency. The impact of science on people’s lives, and the implications of scientific assessments for society and the economy are now so great that  people won’t just believe scientists when they say “trust me, I’m an expert.” It is not just scientists who want to be able to see inside scientific datasets, to see how robust they are and ask difficult questions about their implications. Science has to adapt.”
  • The study will look at questions such as: What are the benefits and risks of openly sharing scientific data? How does the rise of the blogosphere change scientific research? What responsibility should scientists, their institutions and the funders of research have for open data? How do we make information more accessible and who will pay to do it? Should privately funded scientists be held to the same standards as those who are publicly funded? How do we balance openness against intellectual property rights, and in the case of medical information how do we protect patient confidentiality? Will the same rules apply to scientists across the world?
  • ...1 more annotation...
  • “Different scientific disciplines share their information very differently.  The human genome project was incredibly open in how data were shared. But in biomedical science you also have drug trials conducted where no results are made public.” 
Weiye Loh

Does "Inclusion" Matter for Open Government? (The Answer Is, Very Much Indeed... - 0 views

  • But in the context of the Open Government Partnership and the 70 or so countries that have already committed themselves to this or are in the process, I’m not sure that the world can afford to wait to see whether this correlation is direct, indirect or spurious, especially if we can recognize that in the world of OGP, the currency of accumulation and concentration is not raw economic wealth but rather raw political power.
  • in the same way as there appears to be an association between the rise of the Internet and increasing concentrations of wealth one might anticipate that the rise of Internet enabled structures of government might be associated with the increasing concentration of political power in fewer and fewer hands and particularly the hands of those most adept at manipulating the artifacts and symbols of the new Internet age.
  • I am struck by the fact that while the OGP over and over talks about the importance and value and need for Open Government there is no similar or even partial call for Inclusive Government.  I’ve argued elsewhere how “Open”, in the absence of attention being paid to ensuring that the pre-conditions for the broadest base of participation are in place, will almost inevitably lead to the empowerment of the powerful. What I fear with the OGP is that by not paying even a modicum of attention to the issue of inclusion or inclusive development and participation, all of the idealism and energy that is displayed today in Brasilia is being directed towards the creation of the Governance equivalents of the Internet billionaires, whatever that might look like.
  • ...1 more annotation...
  • crowd sourced public policy
  •  
    alongside the rise of the Internet and the empowerment of the Internet generation has emerged the greatest inequalities of wealth and privilege that any of the increasingly Internet enabled economies/societies have experienced at least since the Great Depression and perhaps since the beginnings of systematic economic record keeping.  The association between the rise of inequality and the rise of the Internet has not yet been explained and it may simply be a coincidence, but somehow I'm doubtful and we await a newer generation of rather more critical and less dewy-eyed economists to give us the models and explanations for this co-evolution.
Weiye Loh

Secrecy in the age of WikiLeaks - 1 views

  •  
    As government agencies look to leverage new technologies to communicate with the public, move more citizen services online, share services amongst agencies, share intelligence for national security purposes and collaborate with other nations and private industry, they will need to take a more open stance to secrecy and information sharing. But to mitigate risks, they need to take a more solid security stance at the same time. It is imperative for leaders at all levels within government (agencies, departments, contractors, etc.) to weigh the risks and benefits of making information more accessible and, once decided, put strong safeguards in place to ensure only those who need access can get access. Information leaks imply failures across multiple areas, particularly risk management, access control and confidentiality. The ongoing WikiLeaks exposé clearly shows that the threat is not always from external groups; it can be far more insidious when it stems from trusted individuals within an organisation.
Weiye Loh

Skepticblog » A Creationist Challenge - 0 views

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – he can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder of the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris (he died in 2006 following a stroke) held a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • ...26 more annotations...
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen J. Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species) is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining their future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldridge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .“ So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: (Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldridge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldridge’s point – once again we see intellectual dishonesty in his methods of an astounding degree.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on twitter from Evil Eye. (paraphrase) “those that say atheism is a religion, is like saying ‘not collecting stamps’ is a hobby too.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially comprised of four letters (ACGT for DNA), and every triplet of three letters equates to a specific amino acid. There are 64 (4^3) possible three letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might alter from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes, out of the gazillions of possible codes, do we find for each type of protein in different species? If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes- therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument. (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – mutations are not selective, and natural selection is not creative (does not increase variation). These are true but irrelevant, because mutations increase variation and information, and selection is a creative force that results in the differential survival of better adapted variation.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here's a scientific case by an actual scientists, you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn't exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
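The codon arithmetic in the annotations above (4³ = 64 three-letter combinations mapping to 20 amino acids, so most amino acids have several codes and some one-letter mutations are “silent”) can be sketched in a few lines of Python. This is a minimal illustration using a handful of real assignments from the standard genetic code, not a full 64-entry table:

```python
# A partial codon table from the standard genetic code (DNA alphabet).
# Glycine has four codons, aspartate two, tryptophan one; three codons
# are "housekeeping" stop signals.
CODON_TABLE = {
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "GAT": "Asp", "GAC": "Asp",
    "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Translate a DNA string into amino acids, three letters at a time."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

original = "GGTGAT"  # Gly-Asp
mutated  = "GGCGAT"  # one-letter change (T -> C) in the first codon

# A "silent" mutation: the DNA differs, but the protein is identical.
assert translate(original) == translate(mutated)
```

A one-letter change (GGT → GGC) leaves the translated protein unchanged – exactly the kind of redundancy that lets related species accumulate progressively different silent codes for the same protein.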
Weiye Loh

TPM: The Philosophers' Magazine | Is morality relative? Depends on your personality - 0 views

  • no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
  • If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
  • in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here
  • ...8 more annotations...
  • when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
  • consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
  • Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
  • Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
  • People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
  • To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
  • The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
  • If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
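The study design described in the annotations above (measure a personality trait, measure relativist responses, then correlate the two) can be sketched with made-up numbers. Every value below – the scores and the participants – is hypothetical, invented purely to illustrate the kind of positive correlation Feltz and Cokely report:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical participants: an openness-to-experience score (0-100) and
# the proportion of relativist answers given (0 = fully objectivist,
# 1 = fully relativist).
openness   = [35, 42, 50, 61, 70, 78, 85]
relativism = [0.1, 0.2, 0.2, 0.5, 0.6, 0.7, 0.9]

r = pearson_r(openness, relativism)
assert r > 0  # higher openness associates with more relativist answers
```

With real participants the correlation would of course be noisier; the point is only that a single coefficient summarises the openness-relativism association the study tests.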
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things'. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
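The behavior-based insurance pricing described above can be sketched in a few lines. Everything here -- the speed threshold, the weights, and the premium formula -- is a hypothetical illustration for the sketch, not any insurer's actual model:

```python
# Toy usage-based insurance pricing from telematics sensor data.
# Thresholds, weights, and the pricing formula are made-up assumptions.

def risk_score(speed_readings_kmh, hard_brake_events, km_driven):
    """Combine simple driving-behaviour signals into a 0-1 risk score."""
    if km_driven == 0 or not speed_readings_kmh:
        return 0.0
    # Share of sampled readings above an (assumed) 120 km/h threshold
    speeding_share = sum(1 for s in speed_readings_kmh if s > 120) / len(speed_readings_kmh)
    # Hard-braking events per kilometre, capped at 1 after scaling
    brake_rate = hard_brake_events / km_driven
    return min(1.0, 0.6 * speeding_share + 0.4 * min(1.0, brake_rate * 100))

def monthly_premium(base_rate, score):
    """Scale a base premium by observed behaviour, not demographics."""
    return round(base_rate * (1 + score), 2)

readings = [95, 110, 132, 88, 125, 101]  # sampled speeds in km/h
score = risk_score(readings, hard_brake_events=3, km_driven=400)
premium = monthly_premium(50.0, score)
```

The point of the sketch is the shift the excerpt describes: the charge varies with what the sensors actually observed, so two demographically identical drivers can pay different premiums.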
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
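The "test and learn" loop at its simplest is a controlled experiment followed by a significance check. A minimal sketch, assuming made-up traffic numbers and the conventional 95% cutoff; real experimentation platforms add much more (randomisation, sequential testing, guardrail metrics):

```python
# Two-proportion z-test comparing conversion rates of variants A and B.
# Traffic figures and the 1.96 critical value are illustrative assumptions.
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z statistic, whether |z| clears the 95% two-sided threshold)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pool the rates under the null hypothesis of no difference
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return z, abs(z) > 1.96

# Variant B converts 2.6% vs A's 2.0% on 10,000 visitors each
z, significant = ab_test(200, 10_000, 260, 10_000)
```

Trusting the output of a test like this over instinct is exactly the cultural change the excerpt says senior leaders must model.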
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
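The "paying only for what you use" model above boils down to metering units of consumption and billing them as a variable cost. A toy sketch; the GB-hour unit and its price are made-up assumptions:

```python
# Illustrative pay-per-use billing behind "anything as a service":
# customers are charged for metered units rather than buying the asset.

RATE_PER_GB_HOUR = 0.004  # hypothetical unit price in dollars

def bill(usage_events):
    """Sum metered (gigabytes, hours) sessions into one variable-cost charge."""
    units = sum(gb * hrs for gb, hrs in usage_events)
    return round(units * RATE_PER_GB_HOUR, 2)

# Two sessions: 2 GB for 10 hours and 0.5 GB for 100 hours = 70 GB-hours
charge = bill([(2.0, 10), (0.5, 100)])
```

No usage, no charge: that is what lets customers book the service as a variable cost instead of a capital investment.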
  • 8. The age of the multisided business model. Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid. The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid. The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
Weiye Loh

Office of Science & Technology - Democracy's Open Secret - 0 views

  • There is a deeper issue here that spans political parties across nations: a lack of recognition among policy makers of their dependence on experts in making wise decisions. Experts do not, of course, determine how policy decisions ought to be made, but they do add considerable value to wise decision making.
  • The deeper issue at work here is an open secret in the practice of democracy, and that is the fact that our elected leaders are chosen from among us, the people.  As such, politicians tend to reflect the views of the general public on many subjects - not just those subjects governed solely by political passions, but also those that are traditionally the province of experts.  Elected officials are not just a lot like us, they are us.
  • For example, perhaps foreshadowing contemporary US politics, in 1996 a freshman member of the US Congress proposed eliminating the US government's National Weather Service, declaring that the agency was not needed because "I get my weather from The Weather Channel." Of course the weather information found on The Weather Channel comes from a sophisticated scientific and technological infrastructure built by the federal government over many decades, which supports a wide range of economic activity, from agriculture to airlines, as well as the private sector weather services.
  • ...7 more annotations...
  • European politicians have their own blind spots at the interface of science and policy.  For instance, several years ago former German environment minister Sigmar Gabriel claimed rather implausibly that: "You can build 100 coal-fired power plants and don't have to have higher CO2 emissions."  His explanation was that Germany participates in emissions trading and this would necessarily limit carbon dioxide no matter how much was produced. Obviously, emissions trading cannot make the impossible possible.
  • We should expect policy makers to face difficulties when governance involves considerations of science, technology, and innovation, for the simple reason that they are just like everyone else -- mostly ignorant about mostly everything.
  • In 2010, the US NSF reported that 28% of Americans and 34% of Europeans believed that the sun goes around the earth. Similarly, 30% of Americans and 41% of Europeans believe that radioactivity results only from human activities. It should not be so surprising when we learn that policy makers may share such perspectives.
  • A popular view is that more education about science and technology will lead to better decisions.  While education is, of course, important to a healthy democracy, it will never result in a populace (or their representatives) with expertise in everything.  
  • Achieving such heroic levels of expertise is not realistic for anyone.  Instead, we must rely on specialized experts to inform decision making. Just as you and I often need to consult with experts when dealing with our health, home repairs, finances, and other tasks, so too do policy makers need to tap into expertise in order to make good decisions.
  • It should be far less worrisome that the public or policy makers do not understand this or that information that experts may know well. What should be of more concern is that policy makers appear to lack an understanding of how they can tap into expertise to inform decision making. This situation is akin to flying blind. Specialized expertise typically does not compel particular decisions, but it does help to make decisions more informed. This distinction lies behind Winston Churchill's oft-cited advice that science should be "on tap, but not on top." Effective governance does not depend upon philosopher kings in governments or in the populace, but rather on the use of effective mechanisms for bringing expertise into the political process.
  • It is the responsibility - even the special expertise - of policy makers to know how to use the instruments of government to bring experts into the process of governance. The troubling aspect of the statements and actions by the Gummers, Gabriels, and Bachmanns of the political world lies not in their lack of knowledge about science, but in their lack of knowledge about government.
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • ...25 more annotations...
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • The foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
Weiye Loh

Arianna Huffington: The Media Gets It Wrong on WikiLeaks: It's About Broken Trust, Not ... - 0 views

  • Too much of the coverage has been meta -- focusing on questions about whether the leaks were justified, while too little has dealt with the details of what has actually been revealed and what those revelations say about the wisdom of our ongoing effort in Afghanistan. There's a reason why the administration is so upset about these leaks.
  • True, there hasn't been one smoking-gun, bombshell revelation -- but that's certainly not to say the cables haven't been revealing. What there has been instead is more of the consistent drip, drip, drip of damning details we keep getting about the war.
  • It's notable that the latest leaks came out the same week President Obama went to Afghanistan for his surprise visit to the troops -- and made a speech about how we are "succeeding" and "making important progress" and bound to "prevail."
  • ...16 more annotations...
  • The WikiLeaks cables present quite a different picture. What emerges is one reality (the real one) colliding with another (the official one). We see smart, good-faith diplomats and foreign service personnel trying to make the truth on the ground match up to the one the administration has proclaimed to the public. The cables show the widening disconnect. It's like a foreign policy Ponzi scheme -- this one fueled not by the public's money, but the public's acquiescence.
  • The second aspect of the story -- the one that was the focus of the symposium -- is the changing relationship to government that technology has made possible.
  • Back in the year 2007, B.W. (Before WikiLeaks), Barack Obama waxed lyrical about government and the internet: "We have to use technology to open up our democracy. It's no coincidence that one of the most secretive administrations in our history has favored special interest and pursued policy that could not stand up to the sunlight."
  • Not long after the election, in announcing his "Transparency and Open Government" policy, the president proclaimed: "Transparency promotes accountability and provides information for citizens about what their Government is doing. Information maintained by the Federal Government is a national asset." Cut to a few years later. Now that he's defending a reality that doesn't match up to, well, reality, he's suddenly not so keen on the people having a chance to access this "national asset."
  • Even more wikironic are the statements by his Secretary of State who, less than a year ago, was lecturing other nations about the value of an unfettered and free internet. Given her description of the WikiLeaks as "an attack on America's foreign policy interests" that have put in danger "innocent people," her comments take on a whole different light. Some highlights: In authoritarian countries, information networks are helping people discover new facts and making governments more accountable... technologies with the potential to open up access to government and promote transparency can also be hijacked by governments to crush dissent and deny human rights... As in the dictatorships of the past, governments are targeting independent thinkers who use these tools. Now "making government accountable" is, as White House spokesman Robert Gibbs put it, a "reckless and dangerous action."
  • Jay Rosen, one of the participants in the symposium, wrote a brilliant essay entitled "From Judith Miller to Julian Assange." He writes: For the portion of the American press that still looks to Watergate and the Pentagon Papers for inspiration, and that considers itself a check on state power, the hour of its greatest humiliation can, I think, be located with some precision: it happened on Sunday, September 8, 2002. That was when the New York Times published Judith Miller and Michael Gordon's breathless, spoon-fed -- and ultimately inaccurate -- account of Iraqi attempts to buy aluminum tubes to produce fuel for a nuclear bomb.
  • Miller's after-the-facts-proved-wrong response, as quoted in a Michael Massing piece in the New York Review of Books, was: "My job isn't to assess the government's information and be an independent intelligence analyst myself. My job is to tell readers of The New York Times what the government thought about Iraq's arsenal." In other words, her job is to tell citizens what their government is saying, not, as Obama called for in his transparency initiative, what their government is doing.
  • As Jay Rosen put it: Today it is recognized at the Times and in the journalism world that Judy Miller was a bad actor who did a lot of damage and had to go. But it has never been recognized that secrecy was itself a bad actor in the events that led to the collapse, that it did a lot of damage, and parts of it might have to go. Our press has never come to terms with the ways in which it got itself on the wrong side of secrecy as the national security state swelled in size after September 11th.
  • And in the WikiLeaks case, much of media has again found itself on the wrong side of secrecy -- and so much of the reporting about WikiLeaks has served to obscure, to conflate, to mislead. For instance, how many stories have you heard or read about all the cables being "dumped" in "indiscriminate" ways with no attempt to "vet" and "redact" the stories first. In truth, only just over 1,200 of the 250,000 cables have been released, and WikiLeaks is now publishing only those cables vetted and redacted by their media partners, which includes the New York Times here and the Guardian in England.
  • The establishment media may be part of the media, but they're also part of the establishment. And they're circling the wagons. One method they're using, as Andrew Rasiej put it after the symposium, is to conflate the secrecy that governments use to operate and the secrecy that is used to hide the truth and allow governments to mislead us.
  • Nobody, including WikiLeaks, is promoting the idea that government should exist in total transparency.
  • Assange himself would not disagree. "Secrecy is important for many things," he told Time's Richard Stengel. "We keep secret the identity of our sources, as an example, take great pains to do it." At the same time, however, secrecy "shouldn't be used to cover up abuses."
  • Decentralizing government power, limiting it, and challenging it was the Founders' intent and these have always been core conservative principles. Conservatives should prefer an explosion of whistleblower groups like WikiLeaks to a federal government powerful enough to take them down. Government officials who now attack WikiLeaks don't fear national endangerment, they fear personal embarrassment. And while scores of conservatives have long promised to undermine or challenge the current monstrosity in Washington, D.C., it is now an organization not recognizably conservative that best undermines the political establishment and challenges its very foundations.
  • It is not, as Simon Jenkins put it in the Guardian, the job of the media to protect the powerful from embarrassment. As I said at the symposium, its job is to play the role of the little boy in The Emperor's New Clothes -- brave enough to point out what nobody else is willing to say.
  • When the press trades truth for access, it is WikiLeaks that acts like the little boy. "Power," wrote Jenkins, "loathes truth revealed. When the public interest is undermined by the lies and paranoia of power, it is disclosure that takes sanity by the scruff of its neck and sets it back on its feet."
  • A final aspect of the story is Julian Assange himself. Is he a visionary? Is he an anarchist? Is he a jerk? This is fun speculation, but why does it have an impact on the value of the WikiLeaks revelations?
Weiye Loh

The Black Swan of Cairo | Foreign Affairs - 0 views

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • ...21 more annotations...
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations. Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007.
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
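Taleb's "turkey problem" above — inferring safety from the absence of past variation — can be sketched in a few lines. This is a hypothetical illustration with invented numbers, not anything from the article itself:

```python
# Hypothetical sketch of the "turkey problem": a naive observer estimates
# risk from the worst drop seen so far, so confidence is highest on the
# very day before the blowup. All numbers here are invented.
history = [100 + (day % 3) * 0.1 for day in range(1000)]  # 1,000 calm days

def worst_observed_drop(observations):
    """Naive risk proxy: the largest one-step fall in the record."""
    return max(
        observations[i] - observations[i + 1]
        for i in range(len(observations) - 1)
    )

print(worst_observed_drop(history))   # mild: past variation is tiny
history.append(0)                     # day 1,001: Thanksgiving
print(worst_observed_drop(history))   # the tail risk was invisible before
```

The point of the sketch is that nothing in the first 1,000 observations hints at the size of the 1,001st move — which is exactly why "confidence in stability was maximal" just before 2007.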
Weiye Loh

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • ...17 more annotations...
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing, the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
Weiye Loh

The Medium Is Not The Message: 3 Handwritten Newspapers | Brain Pickings - 0 views

  • Handwritten newspapers.
  • Since 1927, The Musalman has been quietly churning out its evening edition of four pages, all of which are hand-written by Indian calligraphers in the shadow of the Wallajah Mosque in the city of Chennai. According to Wired, it might just be the last remaining hand-written newspaper in the world. It’s also India’s oldest daily newspaper in Urdu, the Hindustani language typically spoken by Muslims in South Asia. The Musalman: Preservation of a Dream is a wonderful short film by Ishani K. Dutta, telling the story of the unusual publication and its writers’ dedication to the ancient art of Urdu calligraphy.

  • I mentioned a fascinating reversal of the-medium-is-the-message as one Japanese newspaper reverted to hand-written editions once the earthquake-and-tsunami disaster destroyed all power in the city of Ishinomaki in Miyagi Prefecture. For the next six days, the editors of the Ishinomaki Hibi Shimbun “printed” the daily newspaper’s disaster coverage the only way possible: By hand, in pen and paper. Using flashlights and marker pens, the reporters wrote the stories on poster-size paper and pinned the dailies to the entrance doors of relief centers around the city. Six staffers collected stories, which another three digested, spending an hour and a half per day composing the newspapers by hand.
  • Minuscule literacy rates and prevailing poverty may not be conditions particularly conducive to publishing entrepreneurship, but they were no hindrance for Monrovia's The Daily Talk, a clever concept by Alfred Sirleaf that reaches thousands of Liberians every day by printing just one copy. That copy just happens to reside on a large blackboard on the side of one of the capital's busiest roads. Sirleaf started the project in 2000, at the peak of Liberia's civil war, but its cultural resonance and open access sustained it long after the war was over. To this day, he runs this remarkable one-man show as the editor, reporter, production manager, designer, fact-checker and publicist of The Daily Talk. For an added layer of thoughtfulness and sophistication, Sirleaf uses symbols to indicate specific topics for those who struggle to read. "The common man in society can't afford a newspaper, can't afford to buy a generator to get on the internet - you know, power shortage - and people are caught up in a city where they have no access to information. And all of these things motivated me to come up with a kind of free media system for people to get informed." ~ Alfred Sirleaf
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • ...20 more annotations...
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method, " Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
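The merging and de-duplication Rohde performs can be sketched in miniature. The record layout below (station id, month, temperature) and the dataset names are invented for illustration; the real archives arrive in 14 incompatible formats and need far more careful reconciliation.

```python
# Toy sketch of merging overlapping temperature datasets and dropping
# duplicate readings. Purely illustrative -- the real pipeline must first
# parse many distinct file formats and reconcile station identifiers.

def merge_datasets(datasets):
    """Combine records from several sources, keeping one reading per
    (station, month); the first source listed wins on conflicts."""
    merged = {}
    for dataset in datasets:
        for station_id, year_month, temp in dataset:
            # setdefault only stores the value if the key is new,
            # so earlier sources take precedence over later ones.
            merged.setdefault((station_id, year_month), temp)
    return merged

# Two hypothetical sources that overlap on one station-month.
ghcn_like = [("ST001", "1900-01", -2.1), ("ST001", "1900-02", -1.4)]
other_src = [("ST001", "1900-01", -2.1), ("ST002", "1900-01", 4.3)]
combined = merge_datasets([ghcn_like, other_src])
print(len(combined))  # the duplicate ST001 January reading is dropped
```

The "first source wins" rule is one simple conflict policy; a real merge would compare the overlapping values and flag disagreements rather than silently discarding one.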
  • The wealth of data Rohde has collected so far – some of it dating back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperatures records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are.
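The gridding approach used by the big three groups can be illustrated with a minimal sketch. The cell size, station coordinates and readings below are invented, and real analyses weight cells by area and work with temperature anomalies rather than raw readings; this shows only the basic "average within cells, then across cells" idea.

```python
# Minimal sketch of grid-cell averaging for a single month's readings.
# All numbers are hypothetical; agencies use area weighting, anomalies
# and much finer quality control.

def grid_average(stations, cell_size=5.0):
    """Average station temperatures within lat/lon cells, then across cells."""
    cells = {}
    for lat, lon, temp in stations:
        # Floor division bins each station into a cell_size-degree box.
        key = (int(lat // cell_size), int(lon // cell_size))
        cells.setdefault(key, []).append(temp)
    # One number per cell: the mean of all stations falling inside it.
    cell_means = [sum(v) / len(v) for v in cells.values()]
    # A simple (unweighted) global figure: the mean of the cell means.
    return sum(cell_means) / len(cell_means)

# Three hypothetical stations: the first two share a cell, the third does not.
readings = [(51.5, -0.1, 10.0), (52.2, -1.9, 12.0), (40.7, -74.0, 15.0)]
print(grid_average(readings))  # prints 13.0
```

Note how the two stations in the shared cell are first averaged to 11.0, so a region dense with stations does not dominate the global figure; Muller's alternative of weighting individual stations by reliability avoids the grid entirely.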
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun from one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
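To give a flavour of what an automatic correction looks like, here is a toy adjustment for one of the simplest systematic biases: a step change introduced by a station move at a known date. Everything here is hypothetical; Berkeley Earth's actual algorithms must detect breakpoints statistically rather than being told where they are.

```python
# Toy homogenisation sketch: a station move at a known index introduces a
# spurious step change; shift the later segment so its mean matches the
# earlier one. Entirely illustrative -- real methods locate breakpoints
# statistically and must distinguish artefacts from genuine climate shifts.

def adjust_step(series, break_index):
    """Offset readings from break_index onward so segment means agree."""
    before = series[:break_index]
    after = series[break_index:]
    offset = (sum(before) / len(before)) - (sum(after) / len(after))
    return before + [t + offset for t in after]

# A roughly flat 10-degree record with a spurious +1.5 jump after a move.
raw = [10.0, 10.1, 9.9, 11.5, 11.6, 11.4]
fixed = adjust_step(raw, 3)
print([round(t, 2) for t in fixed])  # jump removed, variability kept
```

The appeal of this kind of rule, as Muller notes, is that it involves no human judgment call per station; the hazard is that a genuine warming step would be "corrected" away just as readily, which is why the published methodology will face such intense scrutiny.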
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
joanne ye

Democracy Project to Fill Gap in Online Politics - 3 views

Reference: Democracy Project to Fill Gap in Online Politics (2000, June 8). PR Newswire. Retrieved 23 September, 2009, from Factiva. (Article can be found at bottom of the post) Summary: The D...

human rights digital freedom democracy

started by joanne ye on 24 Sep 09 no follow-up yet
Weiye Loh

Roger Pielke Jr.'s Blog: Wanted: Less Spin, More Informed Debate - 0 views

  • The rejection of proposals that suggest starting with a low carbon price is thus a pretty good guarantee against any carbon pricing at all.  It is rather remarkable to see advocates for climate action arguing against a policy that recommends implementing a carbon price, simply because it does not start high enough for their tastes.  For some, idealism trumps pragmatism, even if it means no action at all.
  • Ward writes: . . . climate change is the result of a number of market failures, the largest of which arises from the fact that the prices of products and services involving emissions of greenhouse gases do not reflect the true costs of the damage caused through impacts on the climate. . . All serious economic analyses of how to tackle climate change identify the need to correct this market failure through a carbon price, which can be implemented, for instance, through cap and trade schemes or carbon taxes. . . A carbon price can be usefully supplemented by improvements in innovation policies, but it needs to be at the core of action on climate change, as this paper by Carolyn Fischer and Richard Newell points out.
  • First, the criticism is off target. A low and rising carbon price is in fact a central element to the policy recommendations advanced by the Hartwell Group in Climate Pragmatism, the Hartwell Paper, and as well, in my book The Climate Fix.  In Climate Pragmatism, we approvingly cite Japan's low-but-rising fossil fuels tax and discuss a range of possible fees or taxes on fossil fuels, implemented, not to penalize energy use or price fossil fuels out of the market, but rather to ensure that as we benefit from today’s energy resources we are setting aside the funds necessary to accelerate energy innovation and secure the nation’s energy future.
  • ...3 more annotations...
  • Here is another debating lesson -- before engaging in public, one should not only read the materials one is critiquing, but also the materials one cites in support of one's own arguments. This is not the first time that Bob Ward has put out misleading information related to my work.  Ever since we debated in public at the Royal Institution, Bob has adopted guerrilla tactics, lobbing nonsense into the public arena and then hiding when challenged to support or defend his views.  As readers here know, I am all for open and respectful debate over these important topics.  Why is it that instead all we get is poorly informed misdirection and spin? Despite the attempts at spin, I'd welcome Bob's informed engagement on this topic. Perhaps he might start by explaining which of the 10 statements that I put up on the mathematics and logic underlying climate pragmatism is incorrect.
  • In comments to another blog, I've identified Bob as a PR flack. I see no reason to change that assessment. In fact, his actions only confirm it. Where does he fit into a scientific debate?
  • Thanks for the comment, but I'll take the other side ;-) First, this is a policy debate that involves various scientific, economic, political analyses coupled with various values commitments including monied interests -- and as such, PR guys are as welcome as anyone else. That said, the problem here is not that Ward is a PR guy, but that he is trying to make his case via spin and misrepresentation. That gets noticed pretty quickly by anyone paying attention and is easily shot down.