
Home/ New Media Ethics 2009 course/ Group items matching "Building" in title, tags, annotations or url


Roger Pielke Jr.'s Blog: Every Relatively Affluent White Guy for Himself

  • one of the big arguments that environmentalists have used about the need to stop climate change is that those who will suffer most are the little brown poor people in far-off lands who will, for instance, experience increased incidence of malaria and exposure to floods and other disasters. (Of course the fact that they are already burdened by such things in huge disproportion to the privileged minority doesn’t seem to enter into the argument).
  • But I raise this point because when it comes to climate survivalism, the little brown folks are nowhere to be seen, and apparently it’s every relatively affluent white guy (and his nuclear family, of course) for himself.
  • Dan Sarewitz takes the Washington Post to task for publishing a bizarre commentary on the coming climate apocalypse: Check out the article by a climate survivalist from the February 27, 2011 Washington Post. (I’m going to go out on a limb and treat the article as if it’s not a satire or hoax, but maybe the joke’s on me.) The author describes how he’s buying solar panels and generators and laying in food and supplies and putting extra locks on his doors and windows in anticipation of the coming climate apocalypse, much in the way that in the 1960s certain nuts were digging shelters in their backyard to provide protection against hydrogen bombs, and in the ‘80s (and probably to this day) right-wing crazies were building up small arsenals to protect themselves against the time when the government tried to take away their right to be bigots.
  • fear of the coming apocalypse seems to be an honorable tradition among some factions of the human race, and besides in this case it’s probably good for the beleaguered economy that this guy is spending what must be lots of money on hardware, both high-tech and low. But there are some elements of climate survivalism that are truly troubling. The fact that the Washington Post chose to put this article on the front page of its Sunday opinion section is an editorial judgment that the author, who is executive director of the Chesapeake Climate Action Committee, is someone whose perspective deserves to be taken seriously.

The Breakthrough Institute: ANALYSIS: Nuclear Moratorium in Germany Could Cause Spike i...

  • The German government announced today that it will shut down seven of the country's seventeen nuclear power plants for an indefinite period, a decision taken in response to widespread protests and a German public increasingly fearful of nuclear power after a nuclear emergency in Japan. The decision places a moratorium on a law that would extend the lifespan of these plants, and is uncharacteristic of Angela Merkel, whose government previously overturned its predecessor's decision to phase nuclear out of Germany's energy supply.
  • The seven plants, each built before 1980, represent 30% of Germany's nuclear electricity generation and 24% of its gross installed nuclear capacity. Shutting down these plants, or even just placing an indefinite hold on their operation, would be a major loss of zero-emissions generation capacity for Germany. The country currently relies on nuclear power from its seventeen nuclear power plants for about a quarter of its electricity supply.
  • The long-term closure of these plants would therefore seriously challenge Germany's carbon emissions efforts as the country tries to meet its goal of a 40% reduction below 1990 carbon emissions levels by 2020.
  • if lost generation were made up for entirely by coal-fired plants, carbon emissions would increase annually by as much as 33 million tons. This would represent an overall 4% annual increase in carbon emissions for the country and an 8% increase in carbon emissions for the power sector alone.
  • The moratorium could cause a spike in CO2 emissions as Germany turns to its other, more carbon-intensive sources to supply its energy demand. Already, the country has been engaged in a "dash for coal", building dozens of new coal plants in response to the perverse incentives and intense lobbying from the coal industries made possible by the European Emissions Trading Scheme. (As previously reported by Breakthrough Europe).
  • Alternatively, should the country try to replace lost generation entirely with power from renewables, it would need to more than double generation of renewable energy, from where it currently stands at 97 billion kWh to about 237 billion kWh. As part of the country's low-carbon strategy, Germany has planned to deploy at least 20% renewable energy sources by 2020. If the nation now chooses to meet this goal by displacing nuclear plants, 2020 emissions levels would be higher than had the country otherwise phased out its carbon-intensive coal or natural gas plants.
  • *Carbon emissions factors used are those estimated by the World Bank in 2009 for new coal-fired power plants (0.795 t CO2/MWh) and new gas-fired power plants (0.398 t CO2/MWh). **Data from Carbon Monitoring for Action, European Nuclear Society Data, and US Energy Information Administration.
  •  
    Carbon dioxide emissions in Germany may increase by 4 percent annually in response to a moratorium on seven of the country's oldest nuclear power plants, as power generation is shifted from nuclear power, a zero carbon source, to the other carbon-intensive energy sources that currently make up the country's energy supply.
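
The 33-million-ton figure above is a straightforward back-of-envelope product of the quoted shares and the World Bank emissions factors. A minimal sketch of that arithmetic, assuming a round number of roughly 560 TWh for Germany's total annual generation (the total is not given in the article; the shares and factors are):

```python
# Back-of-envelope check of the figures quoted above. The total-generation
# number is an assumed round figure, not from the article.

NUCLEAR_SHARE = 0.25          # nuclear supplies ~a quarter of electricity
MORATORIUM_SHARE = 0.30       # the seven plants: 30% of nuclear generation
TOTAL_GENERATION_TWH = 560    # assumed total German generation (TWh/yr)

COAL_T_PER_MWH = 0.795        # World Bank 2009 factor, new coal plant
GAS_T_PER_MWH = 0.398         # World Bank 2009 factor, new gas plant

lost_twh = TOTAL_GENERATION_TWH * NUCLEAR_SHARE * MORATORIUM_SHARE
lost_mwh = lost_twh * 1e6     # 1 TWh = 1e6 MWh

coal_mt = lost_mwh * COAL_T_PER_MWH / 1e6   # Mt CO2/yr if coal fills the gap
gas_mt = lost_mwh * GAS_T_PER_MWH / 1e6     # Mt CO2/yr if gas fills the gap

print(f"Lost nuclear generation: {lost_twh:.0f} TWh/yr")
print(f"CO2 if replaced by coal: {coal_mt:.0f} Mt/yr")
print(f"CO2 if replaced by gas:  {gas_mt:.0f} Mt/yr")
```

With coal at 0.795 t CO2/MWh, the roughly 42 TWh of lost nuclear generation yields about 33 Mt CO2 per year, matching the article's figure; substituting gas at 0.398 t CO2/MWh would roughly halve that.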

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai...

  • Internet-based news websites and the growing popularity of social media have broken the mainstream media's monopoly on news - though not completely. Singapore's PAP-led government was one of the first in the world to devise content regulations for the Internet, issuing restrictions on topics it deemed sensitive as early as 1996.
  • While political parties are broadly allowed to use the Internet to campaign, they were previously prohibited from employing some of the medium's most powerful features, including live audio and video streaming and so-called "viral marketing". Websites not belonging to political parties or candidates but registered as political sites have been banned from activities that could be considered online electioneering.
  • George argued that despite the growing influence of online media, it would be naive to conclude that the PAP's days of domination are numbered. "While the government appears increasingly liberal towards individual self-expression, it continues to intervene strategically at points at which such expression may become politically threatening," he said. "It is safe to assume that the government's digital surveillance capabilities far outstrip even its most technologically competent opponent's evasive abilities."
  • consistent with George's analysis, authorities last week relaxed past regulations that limited the use of the Internet and social media for election campaigning. Political parties and candidates will be allowed to use a broader range of new media platforms, including blogs, micro-blogs, online photo-sharing platforms, social networking sites and electronic media applications used on mobile phones, for election advertising. The loosening, however, only applies to political party-run websites, chat rooms and online discussion forums. Candidates must declare the new media content they intend to use within 12 hours after the start of the election campaign period. George warned in a recent blog entry that the new declaration requirements could open the way for PAP-led defamation suits against new media-using opposition politicians. PAP leaders have historically relied on expensive litigation to suppress opposition and media criticism. "The PAP won't subject everyone's postings to legal scrutiny. But if it decides that a particular opposition politician needs to be utterly demolished, you can bet that no tweet of his would be too tiny, no Facebook update too fleeting ... in order to build the case against the individual," George warned in a journalism blog.
  • While opposition politicians will rely more on new than mainstream media to communicate with voters, they already recognize that the use of social media will not necessarily translate into votes. "[Online support] can give too rosy a picture and a false degree of comfort," said the RP's Jeyaretnam. "People who [interact with] us online are those who are already convinced with our messages anyway."

Hunch Blog | Blog Archive | You've got mail: What your email domain says about you

  • AOL users are most likely to be overweight women ages 35-64 who have a high school diploma and are spiritual, but not religious. They tend to be politically middle of the road, in a relationship of 10+ years, and have children. AOL users live in the suburbs and haven’t traveled outside their own country. Family is their first priority. AOL users mostly read magazines, have a desktop computer, listen to the radio, and watch TV on 1-3 DVRs in their home. At home, they lounge around in sweats. AOL users are optimistic extroverts who prefer sweet snacks and like working on a team.
  • Gmail users are most likely to be thin young men ages 18-34 who are college-educated and not religious. Like other young Hunch users, they tend to be politically liberal, single (and ready to mingle), and childless. Gmail users live in cities and have traveled to five or more countries. They’re career-focused and plugged in — they mostly read blogs, have an iPhone and laptop, and listen to music via MP3s and computers (but they don’t have a DVR). At home, they lounge around in a t-shirt and jeans. Gmail users prefer salty snacks and are introverted and entrepreneurial. They are optimistic or pessimistic, depending on the situation.
  • Hotmail users are most likely to be young women of average build ages 18-34 (and younger) who have a high school diploma and are not religious. They tend to be politically middle of the road, single, and childless. Hotmail users live in the suburbs, perhaps still with their parents, and have traveled to up to five countries. They mostly read magazines and contemporary fiction, have a laptop, and listen to music via MP3s and computers (but they don’t have a DVR). At home, Hotmail users lounge around in a t-shirt and jeans. They’re introverts who prefer sweet snacks and like working on a team. They consider themselves more pessimistic, but sometimes it depends on the situation.
  • Yahoo! users are most likely to be overweight women ages 18-49 who have a high school diploma and are spiritual, but not religious. They tend to be politically middle of the road, in a relationship of 1-5 years, and have children. Yahoo! users live in the suburbs or in rural areas and haven't traveled outside their own country. Family is their first priority. They mostly read magazines, are almost equally likely to have a laptop or desktop computer, listen to the radio and CDs, and watch TV on 1-2 DVRs in their home. At home, Yahoo! users lounge around in pajamas. They're extroverts who prefer sweet snacks and like working on a team. Yahoo! users are optimistic or pessimistic, depending on the situation.
  •  
    What your email domain says about you

Sunita Narain: Indian scientists: missing in action

  • Since then there has been dead silence among the powerful scientific leaders of the country, with one exception: Kiran Karnik, a former employee of ISRO and board member of Devas.
  • when the scientists who understand the issue are not prepared to engage with the public, there can be little informed discussion. The cynical public, which sees scams tumble out each day, believes easily that everybody is a crook. But, as I said, the country’s top scientists have withdrawn further into their comfort holes, their opinion frozen in contempt that Indian society is scientifically illiterate. I can assure you in the future there will be even less conversation between scientists and all of us in the public sphere.
  • This is not good. Science is about everyday policy. It needs to be understood and for this it needs to be discussed and deliberated openly and strenuously. But how will this happen if one side — the one with information, knowledge and power — will not engage in public discourse?
  • I suspect Indian scientists have retired hurt to the pavilion. They were exposed to some nasty public scrutiny on a deal made by a premier science research establishment, Indian Space Research Organisation (ISRO), with Devas, a private company, on the allocation of spectrum. The public verdict was that the arrangement was a scandal; public resources had been given away for a song. The government, already scam-bruised, hastily scrapped the contract.
  • Take the issue of genetically-modified (GM) crops. For long this matter has been decided inside closed-door committee rooms, where scientists are comforted by the fact that their decisions will not be challenged. Their defence is “sound science” and “superior knowledge”. It is interesting that the same scientists will accept data produced by private companies pushing the product. Issues of conflict of interest will be brushed aside as integrity cannot be questioned behind closed doors. Silence is the best insurance. This is what happened inside a stuffy committee room, where scientists sat to give permission to Mahyco-Monsanto to grow genetically-modified brinjal.
  • This case involved a vegetable we all eat. This was a matter of science we had the right to know about and to decide upon. The issue made headlines. The reaction of the scientific fraternity was predictable and obnoxious. They resented the questions. They did not want a public debate.
  • As the controversy raged and more people got involved, the scientists ran for cover. They wanted none of this messy street fight. They were meant to advise prime ministers and the likes, not to answer simple questions of people. Finally, when environment minister Jairam Ramesh took the decision on the side of the ordinary vegetable eater, unconvinced by the validity of the scientific data to justify no-harm, scientists were missing in their public reactions. Instead, they whispered about lack of “sound science” in the decision inside committees.
  • The matter did not end there. The minister commissioned an inter-academy inquiry — six top scientific institutions looked into GM crops and Bt-brinjal — expecting a rigorous examination of the technical issues and data gaps. The report released by this committee was shoddy to say the least. It contained no references or attributions and not a single citation. It made sweeping statements and lifted passages from a government newsletter and even from the global biotech industry. The report was thrashed. Scientists again withdrew into offended silence.
  • The final report of this apex-science group is marginally better in that it includes citations but it reeks of scientific arrogance cloaked in jargon. The committee did not find it fit to review the matter, which had reached public scrutiny. The report is only a cover for their established opinion about the ‘truth’ of Bt-brinjal. Science for them is certainly not a matter of enquiry, critique or even dissent.
  • the world has changed. No longer is this report meant only for top political and policy leaders, who would be overwhelmed by the weight of the matter, the language and the expert knowledge of the writer. The report will be subjected to public scrutiny. Its lack of rigour will be deliberated, its unquestioned assertions challenged.
  • This is the difference between the manufactured comfortable world of science behind closed doors and the noisy messy world outside. It is clear to me that Indian scientists need confidence to creatively engage in public concerns. The task to build scientific literacy will not happen without their engagement and their tolerance for dissent. The role of science in Indian democracy is being revisited with a new intensity. The only problem is that the key players are missing in action.

TODAYonline | Commentary | Science, shaken, must take stock

  • Japan's part-natural, part-human disaster is an extraordinary event. As well as dealing with the consequences of an earthquake and tsunami, rescuers are having to evacuate thousands of people from the danger zone around Fukushima. In addition, the country is blighted by blackouts from the shutting of 10 or more nuclear plants. It is a textbook case of how technology can increase our vulnerability through unintended side-effects.
  • Yet there had been early warnings from scientists. In 2006, Professor Katsuhiko Ishibashi resigned from a Japanese nuclear power advisory panel, saying the policy of building in earthquake zones could lead to catastrophe, and that design standards for proofing them against damage were too lax. Further back, the seminal study of accidents in complex technologies was Professor Charles Perrow's Normal Accidents, published in 1984.
  • Things can go wrong with design, equipment, procedures, operators, supplies and the environment. Occasionally two or more will have problems simultaneously; in a complex technology such as a nuclear plant, the potential for this is ever-present.
  • in complex systems, "no matter how effective conventional safety devices are, there is a form of accident that is inevitable" - hence the term "normal accidents".
  • system accidents occur with many technologies: Take the example of a highway blow-out leading to a pile-up. This may have disastrous consequences for those involved but cannot be described as a disaster. The latter only happens when the technologies involved have the potential to affect many innocent bystanders. This "dread factor" is why the nuclear aspect of Japan's ordeal has come to dominate headlines, even though the tsunami has had much greater immediate impact on lives.
  • It is simply too early to say what precisely went wrong at Fukushima, and it has been surprising to see commentators speak with such speed and certainty. Most people accept that they will only ever have a rough understanding of the facts. But they instinctively ask if they can trust those in charge and wonder why governments support particular technologies so strongly.
  • Industry and governments need to be more straightforward with the public. The pretence of knowledge is deeply unscientific; a more humble approach where officials are frank about the unknowns would paradoxically engender greater trust.
  • Likewise, nuclear's opponents need to adopt a measured approach. We need a fuller democratic debate about the choices we are making. Catastrophic potential needs to be a central criterion in decisions about technology. Advice from experts is useful but the most significant questions are ethical in character.
  • If technologies can potentially have disastrous effects on large numbers of innocent bystanders, someone needs to represent their interests. We might expect this to be the role of governments, yet they have generally become advocates of nuclear power because it is a relatively low-carbon technology that reduces reliance on fossil fuels. Unfortunately, this commitment seems to have reduced their ability to be seen to act as honest brokers, something acutely felt at times like these, especially since there have been repeated scandals in Japan over the covering-up of information relating to accidents at reactors.
  • Post Fukushima, governments in Germany, Switzerland and Austria already appear to be shifting their policies. Rational voices, such as Britain's chief scientific adviser John Beddington, are saying quite logically that we should not compare the events in Japan with the situation in Britain, which does not have the same earthquake risk. Unfortunately, such arguments are unlikely to prevail in the politics of risky technologies.
  • firms and investors involved in nuclear power have often failed to take regulatory and political risk into account; history shows that nuclear accidents can lead to tighter regulations, which in turn can increase nuclear costs. Further ahead, the proponents of hazardous technologies need to bear the full costs of their products, including insurance liabilities and the cost of independent monitoring of environmental and health effects. As it currently stands, taxpayers would pay for any future nuclear incident.
  • Critics of technology are often dubbed in policy circles as anti-science. Yet critical thinking is central to any rational decision-making process - it is less scientific to support a technology uncritically. Accidents happen with all technologies, and are regrettable but not disastrous so long as the technology does not have catastrophic potential; this raises significant questions about whether we want to adopt technologies that do have such potential.

Roger Pielke Jr.'s Blog: The Guardian on Difficult Energy Choices

  • For all the emotive force of events in Japan, though, this is one issue where there is a pressing need to listen to what our heads say about the needs of the future, as opposed to subjecting ourselves to jittery whims of the heart. One of the few solid lessons to emerge from the aged Fukushima plant is that the tendency in Britain and elsewhere to postpone politically painful choices about building new nuclear stations by extending the life-spans of existing ones is dangerous. Beyond that, with or without Fukushima, the undisputed nastiness of nuclear – the costs, the risks and the waste – still need to be carefully weighed in the balance against the different poisons pumped out by coal, which remains the chief economic alternative. Most of the easy third ways are illusions. Energy efficiency has been improving for over 200 years, but it has worked to increase not curb demand. Off-shore wind remains so costly that market forces would simply push pollution overseas if it were taken up in a big way. A massive expansion of shale gas may yet pave the way to a plausible non-nuclear future, and it certainly warrants close examination. The fundamentals of the difficult decisions ahead, however, have not moved with the Earth.
  •  
    The Guardian hits the right note on energy policy choices in the aftermath of the still unfolding Japanese nuclear crisis:

Science, Strong Inference -- Proper Scientific Method

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly:
    1) Devising alternative hypotheses;
    2) Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;
    3) Carrying out the experiment so as to get a clean result;
    4) Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypotheses and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately, "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted, "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • the emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments as some suppose, he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts and proofs and their associated pitfalls.
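The Baconian "exclusion" procedure described in these excerpts - alternative hypotheses, crucial experiments at the forks of the logical tree, rejection of whatever is contradicted, adoption of what is left - can be sketched as a simple elimination loop. This is purely an illustration of the idea; the hypotheses, experiments, and outcomes below are invented, not drawn from the article.

```python
# Sketch of Baconian exclusion: each crucial experiment rules out every
# hypothesis whose prediction contradicts the observed outcome; whatever
# survives all experiments is provisionally adopted.
# All names and outcomes here are hypothetical, for illustration only.

hypotheses = {
    "H1: agent X causes the effect": {"remove_X": "effect gone", "add_Y": "no change"},
    "H2: agent Y causes the effect": {"remove_X": "no change", "add_Y": "effect stronger"},
    "H3: X and Y act jointly":       {"remove_X": "effect gone", "add_Y": "effect stronger"},
}

# Observed outcomes of the two crucial experiments ("Instances of the Fingerpost").
observations = {"remove_X": "effect gone", "add_Y": "effect stronger"}

def exclude(hypotheses, observations):
    """Return only the hypotheses consistent with every crucial experiment."""
    surviving = dict(hypotheses)
    for experiment, outcome in observations.items():
        surviving = {h: preds for h, preds in surviving.items()
                     if preds.get(experiment) == outcome}
    return surviving

survivors = exclude(hypotheses, observations)
# H1 fails the add_Y experiment, H2 fails remove_X; only H3 survives both.
```

The point of the sketch is Platt's: the procedure advances by disproof, and a "hypothesis" that no experiment could eliminate would simply never leave the dictionary.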

Why a hyper-personalized Web is bad for you - Internet - Insight - ZDNet Asia - 0 views

  • Invisibly but quickly, the Internet is changing. Sites like Google and Facebook show you what they think you want to see, based on data they've collected about you.
  • The filter bubble is the invisible, personal universe of information that results--a bubble you live in, and you don't even know it. And it means that the world you see online and the world I see may be very different.
  • As consumers, we can vary our information pathways more and use things like incognito browsing to stop some of the tracking that leads to personalization.
  • ...6 more annotations...
  • it's in these companies' hands to do this ethically--to build algorithms that show us what we need to know and what we don't know, not just what we like.
  • why would the Googles and Facebooks of the world change what they're doing (absent government regulation)? My hope is that, like newspapers, they'll move from a pure profit-making posture to one that recognizes that they're keepers of the public trust.
  • most people don't know how Google and Facebook are controlling their information flows. And once they do, most people I've met want to have more control and transparency than these companies currently offer. So it's a way in to that conversation. First people have to know how the Internet is being edited for them.
  • what's good and bad about the personalization. Tell me some ways that this is not a good thing? Here's a few. 1) It's a distorted view of the world. Hearing your own views and ideas reflected back is comfortable, but it can lead to really bad decisions--you need to see the whole picture to make good decisions; 2) It can limit creativity and innovation, which often come about when two relatively unrelated concepts or ideas are juxtaposed; and 3) It's not great for democracy, because democracy requires a common sense of the big problems that face us and an ability to put ourselves in other peoples' shoes.
  • Stanford researchers Dean Eckles and Maurits Kapstein, who figured out that not only do people have personal tastes, they have personal "persuasion profiles". So I might respond more to appeals to authority (Barack Obama says buy this book), and you might respond more to scarcity ("only 2 left!"). In theory, if a site like Amazon could identify your persuasion profile, it could sell it to other sites--so that everywhere you go, people are using your psychological weak spots to get you to do stuff. I also really enjoyed talking to the guys behind OKCupid, who take the logic of Google and apply it to dating.
  • Nobody noticed when Google went all-in on personalization, because the filtering is very hard to see.

In Japan, a Culture That Promotes Nuclear Dependency - NYTimes.com - 0 views

  • look no further than the Fukada Sports Park, which serves the 7,500 mostly older residents here with a baseball diamond, lighted tennis courts, a soccer field and a $35 million gymnasium with indoor pool and Olympic-size volleyball arena. The gym is just one of several big public works projects paid for with the hundreds of millions of dollars this community is receiving for acce
  • the aid has enriched rural communities that were rapidly losing jobs and people to the cities. With no substantial reserves of oil or coal, Japan relies on nuclear power for the energy needed to drive its economic machine. But critics contend that the largess has also made communities dependent on central government spending — and thus unwilling to rock the boat by pushing for robust safety measures.
  • Tsuneyoshi Adachi, a 63-year-old fisherman, joined the huge protests in the 1970s and 1980s against the plant’s No. 2 reactor. He said many fishermen were angry then because chlorine from the pumps of the plant’s No. 1 reactor, which began operating in 1974, was killing seaweed and fish in local fishing grounds. However, Mr. Adachi said, once compensation payments from the No. 2 reactor began to flow in, neighbors began to give him cold looks and then ignore him. By the time the No. 3 reactor was proposed in the early 1990s, no one, including Mr. Adachi, was willing to speak out against the plant. He said that there was the same peer pressure even after the accident at Fukushima, which scared many here because they live within a few miles of the Shimane plant. “Sure, we are all worried in our hearts about whether the same disaster could happen at the Shimane nuclear plant,” Mr. Adachi said. However, “the town knows it can no longer survive economically without the nuclear plant.”
  • ...1 more annotation...
  • Much of this flow of cash was the product of the Three Power Source Development Laws, a sophisticated system of government subsidies created in 1974 by Kakuei Tanaka, the powerful prime minister who shaped Japan’s nuclear power landscape and used big public works projects to build postwar Japan’s most formidable political machine. The law required all Japanese power consumers to pay, as part of their utility bills, a tax that was funneled to communities with nuclear plants. Officials at the Ministry of Economy, Trade and Industry, which regulates the nuclear industry and oversees the subsidies, refused to specify how much communities have come to rely on those subsidies. “This is money to promote the locality’s acceptance of a nuclear plant,” said Tatsumi Nakano of the ministry’s Agency for Natural Resources and Energy.

YouTube - X-Men: Science Can Build Them, But Is It Ethical? - 0 views

  •  
    Is science capable of enhancing human abilities to the extent of the powers of the X-Men? What are the ethical implications if this is possible? Asa Griggs Candler Professor of Bioethics Paul Root Wolpe explores these questions raised by X-Men: First Class.

Jonathan Stray » Measuring and improving accuracy in journalism - 0 views

  • Accuracy is a hard thing to measure because it’s a hard thing to define. There are subjective and objective errors, and no standard way of determining whether a reported fact is true or false
  • The last big study of mainstream reporting accuracy found errors (defined below) in 59% of 4,800 stories across 14 metro newspapers. This level of inaccuracy — where about one in every two articles contains an error — has persisted for as long as news accuracy has been studied, over seven decades now.
  • With the explosion of available information, more than ever it’s time to get serious about accuracy, about knowing which sources can be trusted. Fortunately, there are emerging techniques that might help us to measure media accuracy cheaply, and then increase it.
  • ...7 more annotations...
  • We could continuously sample a news source’s output to produce ongoing accuracy estimates, and build social software to help the audience report and filter errors. Meticulously applied, this approach would give a measure of the accuracy of each information source, and a measure of the efficiency of their corrections process (currently only about 3% of all errors are corrected.)
  • Real world reporting isn’t always clearly “right” or “wrong,” so it will often be hard to decide whether something is an error or not. But we’re not going for ultimate Truth here,  just a general way of measuring some important aspect of the idea we call “accuracy.” In practice it’s important that the error counting method is simple, clear and repeatable, so that you can compare error rates of different times and sources.
  • Subjective errors, though by definition involving judgment, should not be dismissed as merely differences in opinion. Sources found such errors to be about as common as factual errors and often more egregious [as rated by the sources.] But subjective errors are a very complex category
  • One of the major problems with previous news accuracy metrics is the effort and time required to produce them. In short, existing accuracy measurement methods are expensive and slow. I’ve been wondering if we can do better, and a simple idea comes to mind: sampling. The core idea is this: news sources could take an ongoing random sample of their output and check it for accuracy — a fact check spot check
  • Standard statistical theory tells us what the error on that estimate will be for any given number of samples (If I’ve got this right, the relevant formula is standard error of a population proportion estimate without replacement.) At a sample rate of a few stories per day, daily estimates of error rate won’t be worth much. But weekly and monthly aggregates will start to produce useful accuracy estimates
  • the first step would be admitting how inaccurate journalism has historically been. Then we have to come up with standardized accuracy evaluation procedures, in pursuit of metrics that capture enough of what we mean by “true” to be worth optimizing. Meanwhile, we can ramp up the efficiency of our online corrections processes until we find as many useful, legitimate errors as possible with as little staff time as possible. It might also be possible to do data mining on types of errors and types of stories to figure out if there are patterns in how an organization fails to get facts right.
  • I’d love to live in a world where I could compare the accuracy of information sources, where errors got found and fixed with crowd-sourced ease, and where news organizations weren’t shy about telling me what they did and did not know. Basic factual accuracy is far from the only measure of good journalism, but perhaps it’s an improvement over the current sad state of affairs
  •  
    Professional journalism is supposed to be "factual," "accurate," or just plain true. Is it? Has news accuracy been getting better or worse in the last decade? How does it vary between news organizations, and how do other information sources rate? Is professional journalism more or less accurate than everything else on the internet? These all seem like important questions, so I've been poking around, trying to figure out what we know and don't know about the accuracy of our news sources. Meanwhile, the online news corrections process continues to evolve, which gives us hope that the news will become more accurate in the future.
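The spot-check sampling idea in the excerpts above can be sketched numerically. This is a minimal illustration, not the author's code: the function name and the story counts are invented, and the optional finite-population term corresponds to the "standard error of a population proportion estimate without replacement" formula the author tentatively cites.

```python
import math

def accuracy_estimate(errors_found, n_sampled, population=None):
    """Estimate a news source's error rate from a random spot-check sample.

    Returns (proportion, standard_error). If `population` (total stories
    published in the period) is given, applies the finite-population
    correction for sampling without replacement.
    """
    p = errors_found / n_sampled
    se = math.sqrt(p * (1 - p) / n_sampled)
    if population is not None:
        # Finite-population correction: shrinks the error as the sample
        # approaches the whole output of the news source.
        se *= math.sqrt((population - n_sampled) / (population - 1))
    return p, se

# A few stories per day: the daily estimate is noisy, the monthly
# aggregate is far tighter - matching the article's observation.
p_day, se_day = accuracy_estimate(2, 4)         # 4 stories checked in one day
p_month, se_month = accuracy_estimate(54, 120)  # ~4/day over a month
```

With these invented counts, the monthly estimate's standard error is a fraction of the daily one, which is the article's point: weekly and monthly aggregates, not daily ones, produce accuracy estimates worth publishing.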

Google's in-house philosopher: Technologists need a "moral operating system" | VentureBeat - 0 views

  • technology-makers aren’t supposed to think about the morality of their products — they just build stuff and let other people worry about the ethics. But Horowitz pointed to the Manhattan Project, where physicists developed the nuclear bomb, as an obvious example where technologists should have thought carefully about the moral dimensions of their work. To put it another way, he argued that technology makers should be thinking as much about their “moral operating system” as their mobile operating system.
  • most of the evil in the world comes not from bad intentions, but rather from “not thinking.”
  • “Ethics is hard,” Horowitz said. “Ethics requires thinking.”
  • ...1 more annotation...
  • try to articulate how they decided what was right and wrong. “That’s the first step towards taking responsibility towards what we should do with all of our power,” Horowitz said, later adding, “We have so much power today. It is up to us to figure out what to do.”
  •  
    To illustrate how ethics are getting short shrift in the tech world, Horowitz asked attendees whether they preferred the iPhone or Android. (When the majority voted for the iPhone, he joked that they were "suckers" who just chose the prettier device.) Then he asked whether it was a good idea to take data from an audience member's phone in order to provide various (and mostly beneficial) services, or whether he should be left alone, and the majority of the audience voted to leave him alone. Finally, Horowitz wanted to know whether audience members would use the ideas proposed by John Stuart Mill or by Immanuel Kant to make that decision. Not surprisingly, barely anyone knew what he was talking about. "That's a terrifying result," Horowitz said. "We have stronger opinions about our handheld devices than about the moral framework we should use to guide our decisions."

Royal Society launches study on openness in science | Royal Society - 0 views

  • Science as a public enterprise: opening up scientific information will look at how scientific information should best be managed to improve the quality of research and build public trust.
  • “Science has always been about open debate. But incidents such as the UEA email leaks have prompted the Royal Society to look at how open science really is.  With the advent of the Internet, the public now expect a greater degree of transparency. The impact of science on people’s lives, and the implications of scientific assessments for society and the economy are now so great that  people won’t just believe scientists when they say “trust me, I’m an expert.” It is not just scientists who want to be able to see inside scientific datasets, to see how robust they are and ask difficult questions about their implications. Science has to adapt.”
  • The study will look at questions such as: What are the benefits and risks of openly sharing scientific data? How does the rise of the blogosphere change scientific research? What responsibility should scientists, their institutions and the funders of research have for open data? How do we make information more accessible and who will pay to do it? Should privately funded scientists be held to the same standards as those who are publicly funded? How do we balance openness against intellectual property rights, and in the case of medical information, how do we protect patient confidentiality? Will the same rules apply to scientists across the world?
  • ...1 more annotation...
  • “Different scientific disciplines share their information very differently.  The human genome project was incredibly open in how data were shared. But in biomedical science you also have drug trials conducted where no results are made public.” 

Biomimicry: How Scientists Emulate Nature to Create Sustainable Designs | The Utopianis... - 0 views

  • “The core idea is that nature, imaginative by necessity, has already solved many of the problems we are grappling with. Animals, plants, and microbes are the consummate engineers. They have found what works, what is appropriate, and most important, what lasts here on Earth. This is the real news of biomimicry: After 3.8 billion years of research and development, failures are fossils, and what surrounds us is the secret to survival. Like the viceroy butterfly imitating the monarch, we humans are imitating the best adapted organisms in our habitat. We are learning, for instance, how to harness energy like a leaf, grow food like a prairie, build ceramics like an abalone, self-medicate like a chimp, create color like a peacock, compute like a cell, and run a business like a hickory forest.”
  • A more recent example of biomimetics in action is a biological laser created by two physicists at Harvard Medical School. Malte Gather and Seok Hyun Yun placed a single cell, genetically engineered to produce green fluorescent proteins originally found in jellyfish, into a cavity with two parallel mirrors on either side. When they exposed the cell to pulses of light, it emitted green fluorescent light that focused into a laser beam with the aid of the parallel mirrors. As Gather and Yun pointed out in their paper, the single-cell biological laser avoids the use of “artificial or engineered optical gain materials, such as doped crystals, semiconductors, synthetic dyes and purified gases.”
  •  
    if one of our goals as a species is longevity, we may want to humble ourselves and take a look at how other species manage to live symbiotically with the earth instead of just on it. Biomimetics, or biomimicry, does just that.

journalism.sg » PM's National Day Rally calls for more rational online spaces - 0 views

  • Privately, several independent bloggers have voiced unease at the ugly mob behaviour that swamped cyberspace during the general election. The experience has sparked internal discussions about how best to manage readers’ comments, in particular, since that’s where irrationality has run riot.
  • There are also established bloggers who are no longer content to speak among the converted. They want to widen the online debate so that it does not attract only anti-government voices. (I've made a similar point in an earlier piece.) Don’t be surprised, therefore, if you see some of Singapore’s influential independent bloggers creating new platforms for national debate in the coming months, either by developing new websites or by reorienting their existing ones.
  • But even if they build them, will government sympathisers and spokesmen come? One thing that will have to change is the PAP’s politics of intolerance, which has contributed to the polarisation of debate in Singapore. Its “with-us-or-against-us” philosophy has compelled establishment types to stay clear of plural spaces. (The classic illustration was the PAP’s refusal to take part in The Online Citizen’s multi-party forum before the general election.)
  • ...1 more annotation...
  • The typical establishment individual would refuse to contribute an article to an independent medium that is known to carry opposition party voices, for example. In Singapore’s political culture, it is assumed that any such medium would be blacklisted by people at the top, and that anyone who cooperates with it will be guilty by association. Or, perhaps it is simply that most establishment types lack the confidence to engage in debate on a truly level playing field.

Pergolas for Quality Home Improvement - 1 views

started by Pergolas Adelaide on 04 Oct 11 no follow-up yet

IBM to Apply Analytics to War on Terror - 1 views

started by Weiye Loh on 14 Oct 09 no follow-up yet

How to Build A Dinosaur - 3 views

started by guanyou chen on 22 Oct 09 no follow-up yet

BioCentre - 0 views

  • Humanity’s End. The main premise of the book is that proposals that would supposedly promise to make us smarter like never before or add thousands of years to our lives seem rather far-fetched and the domain of mere fantasy. However, it is these very proposals which form the basis of many of the ideas and thoughts presented by advocates of radical enhancement and which are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities in order for us to become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion by focusing on four key figures and their proposals which help to form the core of the case for radical enhancement debate.  First to be examined by Agar is Ray Kurzweil who argues that Man and Machine will become one as technology allows us to transcend our biology. Second, is Aubrey de Grey who is a passionate advocate and pioneer of anti-ageing therapies which allow us to achieve “longevity escape velocity”. Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement and finally James Hughes who is a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids falling into any of the pitfalls of basing his argument solely upon the “playing God” question but instead seeks to posit a well-founded argument in favour of the precautionary principle.
  • ...10 more annotations...
  • Agar directly tackles Hughes’ idea of a “democratic transhumanism.” Here, as post-humans and humans live shoulder to shoulder in wonderful harmony, all persons have access to the technologies they want in order to promote their own flourishing. Undergirding all of this is the belief that no human should feel pressurised to become enhanced. Agar finds no comfort in this and instead foresees a situation where it would be very difficult for humans to ‘choose’ to remain human. The pressure to radically enhance would be considerable, given that the radically enhanced would no doubt occupy the positions of power in society and would consider the full use of enhancement techniques a moral imperative for the good of society. For those able to withstand that pressure, a new underclass would no doubt emerge between the enhanced and the un-enhanced. This is precisely the kind of society which Hughes is overly optimistic will not emerge, but which is more akin to Lee Silver’s prediction of a future divided between the “GenRich” and the “naturals”. This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours, but crucially he does not elaborate further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any initial reflections the author felt able to offer would have added to the discussion and further strengthened his line of argument.
  • A Transhuman Manifesto. The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this was for me a particularly interesting read. The basic premise of Hughes’ argument is that once humans and post-humans recognise each other as citizens, they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord concerning claims against enhancement. Bostrom and Ord argue that it boils down to a preference for the status quo; current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134). Agar discusses the fact that, in his view, Bostrom falls into a focalism – focusing on and magnifying the positives whilst ignoring the negative implications. Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so they are no longer human. It therefore cannot be said to be human enhancement, given that the traits or capacities such enhancement affords us would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity, Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and to transfer our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil's proposals, Agar makes the perceptive comment that, despite the apparent appeal of greater processing power, the resulting being would no longer be human. Introducing chips to the human body and linking the human nervous system to computers, as per Kurzweil's proposals, will prove interesting, but it goes beyond merely creating a copy of us so that future replication and uploading can take place. Rather, it will constitute something more akin to an upgrade. The electrochemical signals the brain uses to achieve thought travel at 100 metres per second; impressive, but contrast this with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours what he calls "rational biological conservatism" (p. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in the pursuit of Kurzweil's enhanced capacities and capabilities we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in "ecological" terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to our species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, due to this interconnectedness, we may well affect or remove our desire to perform such activities (p. 70). Agar's subsequent discussion of, and reference to, the work of Jerry Fodor, philosopher and cognitive scientist, is particularly helpful on the modular functioning of the mind and the implications of human-friendly versus human-unfriendly AI.
  • In terms of the author's discussion of Aubrey de Grey, what is refreshing to read from the outset is the author's clear grasp of de Grey's ideas and motivation. Some make the mistake of thinking he is the man who wants to live forever, when in actual fact this is not the case. De Grey wants to reverse the ageing process through his Strategies for Engineered Negligible Senescence (SENS), so that people live longer and healthier lives. Establishing this clear distinction affords the author the opportunity to offer more grounded critiques of de Grey than some of his other critics manage. The author makes plain that de Grey's immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before. In terms of risk, those radically enhanced to live longer may actually become the most risk-averse and fearful people alive. Taking the example of driving a car, a forty-year-old senescing human being who is involved in a fatal accident on the way to work "stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality" (p. 116). In stark contrast, a negligibly senescent being who dies in such an accident stands to lose, on average, one thousand healthy, youthful years (p. 116).
  • De Grey's response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activities such as driving will no longer be prevalent; moreover, because we are living for longer, we will not be in such a hurry to get to places! Virtual reality comes into its own at this point as a means by which the negligibly senescent 'adrenaline junkie' can engage in such activities without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping and the like in the first place. De Grey's strategy seemingly fails to appreciate the extent to which human beings want "direct" contact with the "real" world.
  • Continuing this idea, Agar's subsequent discussion of the role of fire-fighters is an interesting one. A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent also makes them a better fire-fighter by virtue of increased vitality. Having recently heard de Grey speak, and having had the privilege of discussing his ideas further with him, I found Agar's treatment of de Grey a particular highlight of the book; it made for an engaging discussion. Whilst expressing concern and doubt about de Grey's ideas, Agar is nevertheless quick and gracious enough to acknowledge that, if such therapies could be achieved, de Grey is probably the best person to comment on and deliver them, given the depth of knowledge and understanding he has built up in this area.