New Media Ethics 2009 course / Group items tagged: support


seth kutcher

The Best Remote PC Support I Ever Had - 1 view

Remote PC Support Now's excellent remote PC support services are the best. They have skilled computer tech professionals who can fix your PC while you wait, or you can just go back to work or just simpl...

remote PC support

started by seth kutcher on 12 Sep 11 no follow-up yet
Weiye Loh

The future of customer support: Outsourcing is so last year | The Economist - 0 views

  • Gartner, the research company, estimates that using communities to solve support issues can reduce costs by up to 50%. When TomTom, a maker of satellite-navigation systems, switched on social support, members handled 20,000 cases in the first two weeks and saved it around $150,000. Best Buy, an American gadget retailer, values its 600,000 users at $5m annually.
    "Unsourcing", as the new trend has been dubbed, involves companies setting up online communities to enable peer-to-peer support among users. Instead of speaking with a faceless person thousands of miles away, customers' problems are answered by individuals in the same country who have bought and used the same products. This happens either on the company's own website or on social networks like Facebook and Twitter, and the helpers are generally not paid anything for their efforts.
Weiye Loh

Epiphenom: Suicide in American colleges - the importance of existential well being - 0 views

  • Lindsay Taliaferro, a doctoral candidate at the University of Florida, surveyed over 400 of her fellow students. The response rate was high - around 90%. The good news is that, for the most part, they were not suicidal! On average, they scored 11 on a 70-point scale of suicidal thinking.
  • as expected, those who reported high levels of religious well being (e.g. that they find strength or support from God) or involvement in religious activities had fewer suicidal thoughts.
  • She also asked how hopeless or depressed the students felt, and how much social support they felt they got. When she took this into account, the effects of religion disappeared. What this suggests is that religious well-being and involvement have whatever effects they have by reducing hopelessness and depression, and by increasing social support.
  • what is surprising is that she found a third factor that was even more important than religion and social support. That factor was "Existential Well-Being", which relates to things such as feeling fulfilled and satisfied with life, and finding meaning and purpose in life.
  • Existential Well-Being remained important even after taking into account hopelessness, depression and social support. In other words, even if you feel hopeless, depressed, and alone, existential well-being (unlike religious well-being) can ease suicidal thoughts.
  • this does seem to fit in with other studies which have shown that spirituality does not reduce suicidal thoughts, and that feeling close to God is linked to a history of depression, whereas existential well-being is linked to dramatically less depression.
  • Results from the present investigation indicate that many college students did not demonstrate high involvement in organized religion. Yet they reported high levels of spiritual well-being, especially existential well-being, and low levels of suicidal ideation. Furthermore, results highlighted existential well-being as an important factor associated with lower levels of suicidal ideation among college students. Overall, these findings suggest that a strategy for reducing distress and preventing suicide among college students may involve exploring mechanisms that nurture a sense of meaning in life in individuals for whom organized religion remains unimportant. Health professionals may have more success in improving young people’s sense of meaning and purpose by methods other than an increase in faith, participation in organized religion, or other indicators of religiosity.
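
The statistical move in these excerpts, an association that "disappears" once hopelessness, depression and social support are controlled for, can be made concrete with a toy regression. The sketch below is hypothetical: simulated Python/numpy data under a pure-mediation assumption, not the study's actual variables or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Assumed mediation structure: religiosity lowers hopelessness, and only
# hopelessness drives suicidal ideation (no direct religiosity effect).
religiosity = rng.normal(size=n)
hopelessness = -0.6 * religiosity + rng.normal(size=n)
ideation = 0.8 * hopelessness + rng.normal(size=n)

def ols(y, *predictors):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(ideation, religiosity)[1])                # ~ -0.48: religiosity looks protective
print(ols(ideation, religiosity, hopelessness)[1])  # ~ 0: the "effect" disappears
```

This is the pattern the excerpts describe: the raw association is real, but it is carried entirely by the mediator.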
    Suicide
Weiye Loh

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II - 0 views

  • wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his regard of the climate change debate. That article turns out to be rather instructive, as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre's campaigns I support. I like and admire what he does. But where I don't respect him one jot is in his views on 'Climate Change,' for they jar so very obviously with his supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • MM claims that global temperatures are not accelerating. The claims have, however, been roundly disproved, as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of methodologies employed in that study.
  • This Wikipedia article explains in better layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC).
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glacier in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?). At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If Goldacre really wants to stick his neck out, why doesn't he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on? I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?) The answer is a question and the question is "Where is your evidence?"
Weiye Loh

New voting methods and fair elections : The New Yorker - 0 views

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda's main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority's favorite—it could, in theory, elect someone who was nobody's favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
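
To make the point-counting concrete, here is a minimal Python sketch of a Borda count (ballots and candidate names are invented for illustration). It reproduces the failure mode described above: B wins on points even though A is the first choice of a majority, and burying the strong challenger, as just described, would simply mean A's supporters listing B last.

```python
from collections import defaultdict

def borda_winner(ballots, candidates):
    """Borda count: with n candidates, a ballot awards n-1 points to its
    first choice, n-2 to its second, and so on down to 0 for last place."""
    n = len(candidates)
    scores = defaultdict(int)
    for ballot in ballots:                      # each ballot is a full ranking
        for rank, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - rank
    return max(candidates, key=lambda c: scores[c]), dict(scores)

# Invented electorate of nine voters: A is the majority favorite
# (first on 5 of 9 ballots), yet B beats A on points.
ballots = [["A", "B", "C"]] * 5 + [["B", "C", "A"]] * 4
print(borda_winner(ballots, ["A", "B", "C"]))  # ('B', {'A': 10, 'B': 13, 'C': 4})
```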
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
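
One common flavor of seat allocation, largest remainder with a threshold, can be sketched as follows (party names and vote counts invented; real systems vary widely, as the excerpts note):

```python
from math import floor

def largest_remainder(votes, seats, threshold=0.05):
    """Largest-remainder (Hare quota) allocation: parties below the vote
    threshold are filtered out, each remaining party gets the whole-number
    part of votes/quota in seats, and any leftover seats go to the parties
    with the largest fractional remainders."""
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    quota = sum(eligible.values()) / seats
    allocation = {p: floor(v / quota) for p, v in eligible.items()}
    leftovers = seats - sum(allocation.values())
    for p in sorted(eligible, key=lambda p: eligible[p] / quota - allocation[p],
                    reverse=True)[:leftovers]:
        allocation[p] += 1
    return allocation

# Invented national vote totals; the 5% threshold filters out party D.
votes = {"A": 430_000, "B": 310_000, "C": 220_000, "D": 40_000}
print(largest_remainder(votes, seats=100))  # {'A': 45, 'B': 32, 'C': 23}
```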
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
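
Here is a rough, self-contained sketch of the instant-runoff count just described (ballot data invented; ties among trailing candidates broken arbitrarily):

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting: repeatedly eliminate the candidate with the
    fewest first-preference votes, transferring those ballots to each
    voter's next surviving choice, until someone holds a majority."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)   # skip exhausted ballots
        )
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(remaining) <= 2:
            return leader, dict(tallies)
        remaining.discard(min(remaining, key=lambda c: tallies.get(c, 0)))

# Invented three-way race: C is eliminated first, and C's ballots transfer
# to B, who overtakes the first-round leader A.
ballots = [["A", "B", "C"]] * 8 + [["B", "A", "C"]] * 6 + [["C", "B", "A"]] * 5
print(instant_runoff(ballots))  # ('B', {'A': 8, 'B': 11})
```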
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferable-vote elections can behave in topsy-turvy ways: they are what mathematicians call "non-monotonic," which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn't. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman's calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It's rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren't usually published. But, in Burlington's 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you'd think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result "a rather discouraging one, as regards the dream of a perfect democracy." Szpiro goes so far as to write that "the democratic world would never be the same again."
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
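
A minimal sketch of range voting as described (star ratings invented). Since voters here may leave candidates unrated, the highest total score decides, per the excerpt; Balinski and Laraki's variant would take the median verbal grade instead.

```python
def range_voting_winner(score_ballots):
    """Range voting: each ballot awards points to whichever candidates it
    rates; with unrated candidates allowed, the highest total score wins."""
    totals = {}
    for ballot in score_ballots:                 # ballot: {candidate: score}
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

# Invented five-star ratings; a missing entry means "left unrated".
ballots = [
    {"A": 5, "B": 4},
    {"A": 1, "B": 3, "C": 5},
    {"B": 4, "C": 2},
]
print(range_voting_winner(ballots))  # 'B' (total 11 beats C's 7 and A's 6)
```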
  • Range and approval voting deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
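
And approval voting, using the Nader/Gore/Bush scenario from the excerpt above (electorate invented): approving both Nader and Gore lets a Nader supporter oppose Bush without splitting the vote.

```python
from collections import Counter

def approval_winner(approval_ballots):
    """Approval voting: each voter approves any subset of candidates;
    the candidate approved on the most ballots wins."""
    tallies = Counter(c for ballot in approval_ballots for c in ballot)
    return tallies.most_common(1)[0]

ballots = [
    {"Nader", "Gore"}, {"Nader", "Gore"},   # Nader fans also approve Gore
    {"Gore"}, {"Gore"},
    {"Bush"}, {"Bush"}, {"Bush"},
]
print(approval_winner(ballots))  # ('Gore', 4): Bush's plurality of 3 no longer wins
```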
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
    WIN OR LOSE: No voting system is flawless. But some are less democratic than others. By Anthony Gottlieb
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
Weiye Loh

Cancer resembles life 1 billion years ago, say astrobiologists - microbiology, genomics... - 0 views

  • astrobiologists, working with oncologists in the US, have suggested that cancer resembles ancient forms of life that flourished between 600 million and 1 billion years ago.
  • The genes that controlled the behaviour of these early multicellular organisms still reside within our own cells, managed by more recent genes that keep them in check. It's when these newer controlling genes fail that the older mechanisms take over, and the cell reverts to its earlier behaviours and grows out of control.
  • ...11 more annotations...
  • The new theory, published in the journal Physical Biology, has been put forward by two leading figures in the world of cosmology and astrobiology: Paul Davies, director of the Beyond Center for Fundamental Concepts in Science, Arizona State University; and Charles Lineweaver, from the Australian National University.
  • According to Lineweaver, this suggests that cancer is an atavism, or an evolutionary throwback.
  • In the paper, they suggest that a close look at cancer shows similarities with early forms of multicellular life.
  • “Unlike bacteria and viruses, cancer has not developed the capacity to evolve into new forms. In fact, cancer is better understood as the reversion of cells to the way they behaved a little over one billion years ago, when humans were nothing more than loose-knit colonies of only partially differentiated cells. “We think that the tumours that develop in cancer patients today take the same form as these simple cellular structures did more than a billion years ago,” he said.
  • One piece of evidence to support this theory is that cancers appear in virtually all metazoans, with the notable exception of the bizarre naked mole rat. "This quasi-ubiquity suggests that the mechanisms of cancer are deep-rooted in evolutionary history, a conjecture that receives support from both paleontology and genetics," they write.
  • the genes that controlled this early multi-cellular form of life are like a computer operating system's 'safe mode', and when there are failures or mutations in the more recent genes that manage the way cells specialise and interact to form the complex life of today, then the earlier level of programming takes over.
  • Their notion is in contrast to a prevailing theory that cancer cells are 'rogue' cells that evolve rapidly within the body, overcoming the normal slew of cellular defences.
  • However, Davies and Lineweaver point out that cancer cells are highly cooperative with each other, even as they compete with the host's cells. This suggests a pre-existing complexity that is reminiscent of early multicellular life.
  • cancers' manifold survival mechanisms are predictable, and unlikely to emerge spontaneously through evolution within each individual in such a consistent way.
  • The good news is that this means combating cancer is not necessarily as complex as if the cancers were rogue cells evolving new and novel defence mechanisms within the body. Instead, because cancers fall back on the same evolved mechanisms that were used by early life, we can expect them to remain predictable; thus, if they're susceptible to treatment, it's unlikely they'll evolve new ways to get around it.
  • "If the atavism hypothesis is correct, there are new reasons for optimism," they write.
Weiye Loh

Libel Chill and Me « Skepticism « Critical Thinking « Skeptic North - 0 views

  • Skeptics may by now be very familiar with recent attempts in Canada to ban wifi from public schools and libraries.  In short: there is no valid scientific reason to be worried about wifi.  It has also been revealed that the chief scientists pushing the wifi bans have been relying on poor data and even poorer studies.  By far the vast majority of scientific data that currently exists supports the conclusion that wifi and cell phone signals are perfectly safe.
  • So I wrote about that particular topic in the summer.  It got some decent coverage, but the fear mongering continued. I wrote another piece after I did a little digging into one of the main players behind this, one Rodney Palmer, and I discovered some decidedly pseudo-scientific tendencies in his past, as well as some undisclosed collusion.
  • One night I came home after a long day at work, a long commute, and a phone call that a beloved family pet was dying and would soon be in significant pain. That is the state I was in when I read the news about Palmer and the Parliamentary committee.
  • ...18 more annotations...
  • That’s when I wrote my last significant piece for Skeptic North.  Titled, “Rodney Palmer: When Pseudoscience and Narcissism Collide,” it was a fiery take-down of every claim I heard Palmer speak before the committee, as well as reiterating some of his undisclosed collusion, unethical media tactics, and some reasons why he should not be considered an expert.
  • This time, the article got a lot more reader eyeballs than anything I had ever written for this blog (or my own) and it also caught the attention of someone on a school board which was poised to vote on wifi.  In these regards: Mission very accomplished.  I finally thought that I might be able to see some people in the media start to look at Palmer’s claims with a more critical eye than they had been previously, and I was flattered at the mountain of kind words, re-tweets, reddit comments and Facebook “likes.”
  • The comments section was mostly supportive of my article, and they were one of the few things that kept me from hiding in a hole for six weeks. There were a few comments in opposition to what I wrote, some sensible, most of them incoherent rambling (one commenter, when asked for evidence, actually linked to a YouTube video which they referred to as "peer reviewed").
  • One commenter was none other than the titular subject of the post, Rodney Palmer himself. Here is a screen shot of what he said: [screen shot of the libel/slander threat]
  • Knowing full well the story of the libel threat against Simon Singh, I’ve always thought that if ever a threat like that came my way, I’d happily beat it back with the righteous fury and good humour of a person with the facts on their side.  After all, if I’m wrong, you’d be able to prove me wrong, rather than try to shut me up with a threat of a lawsuit.  Indeed, I’ve been through a similar situation once before, so I should be an old hat at this! Let me tell you friends, it’s not that easy.  In fact, it’s awful.  Outside observers could easily identify that Palmer had no case against me, but that was still cold comfort to me.  It is a very stressful situation to find yourself in.
  • The state of libel and slander laws in this country are such that a person can threaten a lawsuit without actually threatening a lawsuit.  There is no need to hire a lawyer to investigate the claims, look into who I am, where I live, where I work, and issue a carefully worded threatening letter demanding compliance.  All a person has to say is some version of  “Libel.  Slander.  Hmmmm….,” and that’s enough to spook a lot of people into backing off. It’s a modern day bogeyman.  They don’t have to prove it.  They don’t have to act on it.  A person or organization just has to say “BOO!” with sufficient seriousness, and unless you’ve got a good deal of editorial and financial support, discussion goes out the window. Libel Chill refers to the ‘chilling effect’ that the possibility of a libel/slander lawsuit has.  If a person is scared they might get sued, then they won’t even comment on a piece at all.  In my case, I had already commented three times on the wifi scaremongering, but this bogus threat against me was surely a major contributing factor to my not commenting again.
  • I ceased to discuss anything in the comment thread of the original article, and even shied away from other comment threads calling me out. I learned a great deal about the wifi/EMF issue since I wrote the article, but I did not comment on any of it, because I knew that Palmer and his supporters were watching me like a hawk (sorry to stretch the simile), and would likely try to silence me again. I couldn't risk a lawsuit. Even though I knew there was no case against me, I couldn't afford a lawyer just to prove that I didn't do anything illegal.
  • The Libel and Slander Act of Ontario, 1990 hasn't really caught up with the internet. There isn't a clear precedent that defines a blog post, Twitter feed or Facebook post as falling under the umbrella of "broadcast," which is what the act addresses. If I had written the original article in print, Palmer would have had six weeks to file suit against me. But the internet is only kind of considered 'broadcast.' So it could be just six weeks, but he could also have up to two years to act and get a lawyer after me. Truth is, there's not a clear demarcation point for our Canadian legal system.
  • Libel laws in Canada are somewhere in between the Plaintiff-favoured UK system, and the Defendant-favoured US system.  On the one hand, if Palmer chose to incur the expense and time to hire a lawyer and file suit against me, the burden of proof would be on me to prove that I did not act with malice.  Easy peasy.  On the other hand, I would have a strong case that I acted in the best interests of Canadians, which would fall under the recent Supreme Court of Canada decision on protecting what has been termed, “Responsible Communication.”  The Supreme Court of Canada decision does not grant bloggers immunity from libel and slander suits, but it is a healthy dose of welcome freedom to discuss issues of importance to Canadians.
  • Palmer himself did not specify anything against me in his threat.  There was nothing particular that he complained about, he just said a version of “Libel and Slander!” at me.  He may as well have said “Boo!”
  • This is not a DBAD discussion (although I wholeheartedly agree with Phil Plait there). 
  • If you’d like to boil my lessons down to an acronym, I suppose the best one would be DBRBC: Don’t be reckless. Be Careful.
  • I wrote a piece that, although it was not incorrect in any measurable way, was written with fire and brimstone, piss and vinegar.  I stand by my piece, but I caution others to be a little more careful with the language they use.  Not because I think it is any less or more tactically advantageous (because I’m not sure anyone can conclusively demonstrate that being an aggressive jerk is an inherently better or worse communication tool), but because the risks aren’t always worth it.
  • I’m not saying don’t go after a person.  There are egomaniacs out there who deserve to be called out and taken down (verbally, of course).  But be very careful with what you say.
  • ask yourself some questions first: 1) What goal(s) are you trying to accomplish with this piece? Are you trying to convince people that there is a scientific misunderstanding here?  Are you trying to attract the attention of the mainstream media to a particular facet of the issue?  Are you really just pissed off and want to vent a little bit?  Is this article a catharsis, or is it communicative?  Be brutally honest with your intentions, it’s not as easy as you think.  Venting is okay.  So is vicious venting, but be careful what you dress it up as.
  • 2) In order to attain your goals, did you use data, or personalities?  If the former, are you citing the best, most current data you have available to you? Have you made a reasonable effort to check your data against any conflicting data that might be out there? If the latter, are you providing a mountain of evidence, and not just projecting onto personalities?  There is nothing inherently immoral or incorrect with going after the personalities.  But it is a very risky undertaking. You have to be damn sure you know what you’re talking about, and damn ready to defend yourself.  If you’re even a little loose with your claims, you will be called out for it, and a legal threat is very serious and stressful. So if you’re going after a personality, is it worth it?
  • 3) Are you letting the science speak for itself?  Are you editorializing?  Are you pointing out what part of your piece is data and what part is your opinion?
  • 4) If this piece was written in anger, frustration, or otherwise motivated by a powerful emotion, take a day.  Let your anger subside.  It will.  There are many cathartic enterprises out there, and you don’t need to react to the first one that comes your way.  Let someone else read your work before you share it with the internet.  Cooler heads definitely do think more clearly.
Weiye Loh

Want people to get on board with a shift to clean energy? Shield them from economic ins... - 0 views

  • The reality is that a bold new energy and climate change policy would inevitably result in dislocations in certain industries and upset long-established ways of life in many regions; in addition, it would lead to higher prices for basic commodities such as gas, home heating oil, and food. In societies where there are strong social safety nets -- universal healthcare, universal preschool, strong support for new parents, significant investments in public transportation, and sustained support for higher education -- the changes wrought by a paradigm shift in energy will tend not to result in hugely destabilizing effects across whole towns and communities. In fact, with good planning and investments in critical infrastructure, strong environmental policies can result in overall improvements in the quality of life for nearly everyone. Throughout much of the developed world, citizens are willing to pay prices for gasoline that would lead to riots in American streets, because they know that the government revenue raised by high gas taxes is used for programs that directly benefit them. In other words, ten-dollar-a-gallon gas isn’t such a big deal when everyone has great healthcare, great public transportation, and free high-quality schooling.
  • Americans are so battered and anxious right now. Median wages are flat, unemployment is high, politics is paralyzed. Middle-class families are one health problem away from ruin, and when they fall, there's no net. That kind of insecurity, as much as anything, explains the American reticence to launch bold new social programs.
  • Michael Levi points to a fantastic piece by Nassim Taleb and Mark Blyth wherein they approach a similar subject from a seemingly contrary angle, arguing that government efforts to suppress social and economic volatility can backfire. Without the experience of adjusting to small shocks as they come, we won't be prepared when the big shocks arrive:
  • Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks. In fact, they tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policymakers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to "Black Swans" -- that is, they become extremely vulnerable to large-scale events that lie far from the statistical norm and were largely unpredictable to a given set of observers. Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • If a society provides a basic measure of health and economic security for its citizens, its citizens will be more tolerant of a little volatility/risk/ambition in its social and economic policy.
  • This gets at why I think it's extremely difficult to reconcile modern-day conservatism and serious efforts to address climate change (and future resource shortages, and various other sources of long-term risk). The U.S. conservative political program is devoted to increasing economic and social insecurity for average people and decreasing it for wealthy business owners. That is roughly the opposite of the approach you'd want to take if you want to increase society's resilience to the dangers approaching.
    First there's this extremely smart piece from economist Jason Scorse. It makes an argument that I wish had gotten much more attention during the fight over the climate bill, to wit: "people are much more willing to support environmental policies that come with large risks and disruptions to their way of life when other policies are in place to shield them from excessive risk and instability."
joanne ye

Measuring the effectiveness of online activism - 2 views

Reference: Krishnan, S. (2009, June 21). Measuring the effectiveness of online activism. The Hindu. Retrieved September 24, 2009, from Factiva. (Article can be found at bottom of the post) Summary...

online activism freedom control

started by joanne ye on 24 Sep 09 no follow-up yet
Weiye Loh

What is the role of the state? | Martin Wolf's Exchange | FT.com - 0 views

  • This question has concerned western thinkers at least since Plato (5th-4th century BCE). It has also concerned thinkers in other cultural traditions: Confucius (6th-5th century BCE); China’s legalist tradition; and India’s Kautilya (4th-3rd century BCE). The perspective here is that of the contemporary democratic west.
  • The core purpose of the state is protection. This view would be shared by everybody, except anarchists, who believe that the protective role of the state is unnecessary or, more precisely, that people can rely on purely voluntary arrangements.
  • Contemporary Somalia shows the horrors that can befall a stateless society. Yet horrors can also befall a society with an over-mighty state. It is evident, because it is the story of post-tribal humanity, that the powers of the state can be abused for the benefit of those who control it.
  • In his final book, Power and Prosperity, the late Mancur Olson argued that the state was a “stationary bandit”. A stationary bandit is better than a “roving bandit”, because the latter has no interest in developing the economy, while the former does. But it may not be much better, because those who control the state will seek to extract the surplus over subsistence generated by those under their control.
  • In the contemporary west, there are three protections against undue exploitation by the stationary bandit: exit, voice (on the first two of these, see this on Albert Hirschman) and restraint. By "exit", I mean the possibility of escaping from the control of a given jurisdiction, by emigration, capital flight or some form of market exchange. By "voice", I mean a degree of control over the state, most obviously by voting. By "restraint", I mean independent courts, division of powers, federalism and entrenched rights.
  • defining what a democratic state, viewed precisely as such a constrained protective arrangement, is entitled to do.
  • There exists a strand in classical liberal or, in contemporary US parlance, libertarian thought which believes the answer is to define the role of the state so narrowly and the rights of individuals so broadly that many political choices (the income tax or universal health care, for example) would be ruled out a priori. In other words, it seeks to abolish much of politics through constitutional restraints. I view this as a hopeless strategy, both intellectually and politically. It is hopeless intellectually, because the values people hold are many and divergent and some of these values do not merely allow, but demand, government protection of weak, vulnerable or unfortunate people. Moreover, such values are not “wrong”. The reality is that people hold many, often incompatible, core values. Libertarians argue that the only relevant wrong is coercion by the state. Others disagree and are entitled to do so. It is hopeless politically, because democracy necessitates debate among widely divergent opinions. Trying to rule out a vast range of values from the political sphere by constitutional means will fail. Under enough pressure, the constitution itself will be changed, via amendment or reinterpretation.
  • So what ought the protective role of the state to include? Again, in such a discussion, classical liberals would argue for the “night-watchman” role. The government’s responsibilities are limited to protecting individuals from coercion, fraud and theft and to defending the country from foreign aggression. Yet once one has accepted the legitimacy of using coercion (taxation) to provide the goods listed above, there is no reason in principle why one should not accept it for the provision of other goods that cannot be provided as well, or at all, by non-political means.
  • Those other measures would include addressing a range of externalities (e.g. pollution), providing information and supplying insurance against otherwise uninsurable risks, such as unemployment, spousal abandonment and so forth. The subsidisation or public provision of childcare and education is a way to promote equality of opportunity. The subsidisation or public provision of health insurance is a way to preserve life, unquestionably one of the purposes of the state. Safety standards are a way to protect people against the carelessness or malevolence of others or (more controversially) themselves. All these, then, are legitimate protective measures. The more complex the society and economy, the greater the range of the protections that will be sought.
  • What, then, are the objections to such actions? The answers might be: the proposed measures are ineffective, compared with what would happen in the absence of state intervention; the measures are unaffordable and might lead to state bankruptcy; the measures encourage irresponsible behaviour; and, at the limit, the measures restrict individual autonomy to an unacceptable degree. These are all, we should note, questions of consequences.
  • The vote is more evenly distributed than wealth and income. Thus, one would expect the tenor of democratic policymaking to be redistributive and so, indeed, it is. Those with wealth and income to protect will then make political power expensive to acquire and encourage potential supporters to focus on common enemies (inside and outside the country) and on cultural values. The more unequal are incomes and wealth and the more determined are the “haves” to avoid being compelled to support the “have-nots”, the more politics will take on such characteristics.
  • In the 1970s, the view that democracy would collapse under the weight of its excessive promises seemed to me disturbingly true. I am no longer convinced of this: as Adam Smith said, “There is a great deal of ruin in a nation”. Moreover, the capacity for learning by democracies is greater than I had realised. The conservative movements of the 1980s were part of that learning. But they went too far in their confidence in market arrangements and their indifference to the social and political consequences of inequality. I would support state pensions, state-funded health insurance and state regulation of environmental and other externalities. I am happy to debate details. The ancient Athenians called someone who had a purely private life “idiotes”. This is, of course, the origin of our word “idiot”. Individual liberty does indeed matter. But it is not the only thing that matters. The market is a remarkable social institution. But it is far from perfect. Democratic politics can be destructive. But it is much better than the alternatives. Each of us has an obligation, as a citizen, to make politics work as well as he (or she) can and to embrace the debate over a wide range of difficult choices that this entails.
Weiye Loh

The boy who knew too much: a child prodigy: Time Magazine, Zuckerberg and Assange. - 0 views

  • Time Magazine had invited readers from all over the world to vote on who they thought should be its Person of the Year, 2010. The world duly voted. They chose Julian Assange, with 382,026 votes, far outpacing the runner-up, Recep Erdogan, at 233,639 votes. One would therefore have thought, were Time Magazine a democracy, that Julian Assange would have won. He didn't. Mark Zuckerberg, with 18,353 votes, won. To be fair, Time Magazine did note, in its pages, that the final decision rests with its editors - however, it does make clear that Time Magazine's competition is not a democratic one. The voice of the world's people is not one that Time Magazine listens to, on this issue, at least.
  • Time Magazine is, of course, in a difficult position. If it had gone with the world's voters and put Julian Assange on its cover as Time's Person of the Year, it would have offended the US government, with whom Assange is presently battling. So Time may have felt it had no option but to bury Assange's result, by putting him in third position, as they did. Yet that opens them up to another problem: offending those very same voters. About one quarter of all the votes cast were for Assange. That suggests that, most probably, one quarter of its readers support Assange as the top choice. Those people have been snubbed. Their views have been dismissed. That could have repercussions for the sales of Time Magazine, since there is one thing that is very obvious about Assange's supporters: they are very passionate. So Time Magazine could now be in the position of having irked many passionate people, who are likely to do word-of-mouth damage to Time Magazine.
  • If Time does not wish to be held to the views of its readers, then it should not even have a poll on the matter of who should be Time's Person of the Year. To do so is a kind of faux engagement and fake democracy.
  • ...1 more annotation...
  • Mark Zuckerberg does not appear to be that popular a figure. When you consider that Facebook has 600 million users, 18,353 votes for him on Time's poll seem mighty few. Whatever Facebook users think of Facebook, they don't go out of their way to be supportive of Zuckerberg as a public figure.
Weiye Loh

Bankers, Buyouts & Billionaires: Why Big Herba's Research Deficit Isn't About... - 0 views

  • A skeptic challenges a natural health product for the lack of an evidentiary base.  A proponent of that product responds that the skeptic has made a logical error – an absence of evidence is not evidence of absence, and in such a scenario it’s not unreasonable to rely on patient reporting and traditional uses as a guide. The skeptic chimes back with a dissertation on the limits of anecdotal evidence and arguments from antiquity — especially when the corresponding pharma products have a data trail supporting their safety and efficacy. The proponent responds that it’s unfair to hold natural health products to the same evidentiary standard, because only pharma has the money to fund proper research, and they only do so for products they can patent. You can’t patent nature, so no research into natural health products gets done.
  • look here, here, and here for recent examples
  • natural health industry isn’t rich enough to sustain proper research.  Is that true? Natural health, by the numbers On the surface, it certainly wouldn’t appear so. While the industry can be difficult to get a bead on – due both to differing definitions of what it includes (organic foods? natural toothpaste?), and the fact that many of the key players are private companies that don’t report revenues – by any measure it’s sizable. A survey by the University of Guelph  references KPMG estimates that the Natural Health Products sector in Canada grew from $1.24B in 2000 to $1.82B in 2006 – a growth rate that would bring the market to about $2.5B today.   Figures from the Nutrition Business Journal quoted in the same survey seem to agree, suggesting Canada is 3% of a global “supplements” (herbal, homeopathy, vitamins) market that was $68B globally in 2006 and growing at 5% a year – bringing it to perhaps $85B today. Figures from various sources quoted in a recent Health Canada report support these estimates.
  • ...4 more annotations...
  • While certainly not as big as the $820B pharmaceutical industry, $85B is still an awful lot of money, and it's hard to imagine it not being enough to carve out a research budget from. Yet research isn't done by entire industries, but by one tier of the value chain — the companies that manufacture and distribute the products.  If they're not big enough to fund the type of research skeptics are looking for, it won't be done, so let's consider some of the bigger players before we make that call.
  • French giant Boiron (EPA:BOI) is by far the largest distributor of natural health products in Canada – they're responsible for nearly 4000 (15%) of the 26,000 products approved by Health Canada's Natural Health Products Directorate. They're also one of the largest natural health products companies globally, with 2010 revenues of €520M ($700M CAD) – a size achieved not just through the success of killer products like Oscillococcinum, but also through acquisitions. In recent years, the company has acquired both its main French rival Dolisos (giving them 90% of the French homeopathy market) and the largest homeopathy company in Belgium, Unda. So this is a big company that's prepared to spend money to get even bigger. What about spending some of that money on research?  Well, ostensibly it's a priority: “Since 2005, we have devoted a growing level of resources to develop research,” they proclaim in the opening pages of their latest annual report, citing 70 in-progress research projects. Yet the numbers tell a different story – €4.2M in R&D expenditures in 2009, just 0.8% of revenues.
  • To put that in perspective, consider that in the same year, GlaxoSmithKline spent 14% of its revenues on R&D, Pfizer spent 15%, and Merck spent a whopping 21%.
  • But if Boiron’s not spending like pharma on research, there’s one line item where they do go toe to toe: Marketing. The company spent €114M – a full 21% of revenues on marketing in 2009. By contrast, GSK, Pfizer and Merck reported 33%, 29%, and 30% of revenues respectively on their “Selling, General, and Administrative” (SG&A) line – which includes not just sales & marketing expenses, but also executive salaries, support staff, legal, rent, utilities, and other overhead costs. Once those are subtracted out, it’s likely that Boiron spends at least as much of its revenues on marketing as Big Pharma.
Weiye Loh

TODAYonline | Commentary | Science, shaken, must take stock - 0 views

  • Japan's part-natural, part-human disaster is an extraordinary event. As well as dealing with the consequences of an earthquake and tsunami, rescuers are having to evacuate thousands of people from the danger zone around Fukushima. In addition, the country is blighted by blackouts from the shutting of 10 or more nuclear plants. It is a textbook case of how technology can increase our vulnerability through unintended side-effects.
  • Yet there had been early warnings from scientists. In 2006, Professor Katsuhiko Ishibashi resigned from a Japanese nuclear power advisory panel, saying the policy of building in earthquake zones could lead to catastrophe, and that design standards for proofing them against damage were too lax. Further back, the seminal study of accidents in complex technologies was Professor Charles Perrow's Normal Accidents, published in 1984.
  • Things can go wrong with design, equipment, procedures, operators, supplies and the environment. Occasionally two or more will have problems simultaneously; in a complex technology such as a nuclear plant, the potential for this is ever-present.
  • ...9 more annotations...
  • in complex systems, "no matter how effective conventional safety devices are, there is a form of accident that is inevitable" - hence the term "normal accidents".
  • system accidents occur with many technologies: Take the example of a highway blow-out leading to a pile-up. This may have disastrous consequences for those involved but cannot be described as a disaster. The latter only happens when the technologies involved have the potential to affect many innocent bystanders. This "dread factor" is why the nuclear aspect of Japan's ordeal has come to dominate headlines, even though the tsunami has had much greater immediate impact on lives.
  • It is simply too early to say what precisely went wrong at Fukushima, and it has been surprising to see commentators speak with such speed and certainty. Most people accept that they will only ever have a rough understanding of the facts. But they instinctively ask if they can trust those in charge and wonder why governments support particular technologies so strongly.
  • Industry and governments need to be more straightforward with the public. The pretence of knowledge is deeply unscientific; a more humble approach where officials are frank about the unknowns would paradoxically engender greater trust.
  • Likewise, nuclear's opponents need to adopt a measured approach. We need a fuller democratic debate about the choices we are making. Catastrophic potential needs to be a central criterion in decisions about technology. Advice from experts is useful but the most significant questions are ethical in character.
  • If technologies can potentially have disastrous effects on large numbers of innocent bystanders, someone needs to represent their interests. We might expect this to be the role of governments, yet they have generally become advocates of nuclear power because it is a relatively low-carbon technology that reduces reliance on fossil fuels. Unfortunately, this commitment seems to have reduced their ability to be seen to act as honest brokers, something acutely felt at times like these, especially since there have been repeated scandals in Japan over the covering-up of information relating to accidents at reactors.
  • Post-Fukushima, governments in Germany, Switzerland and Austria already appear to be shifting their policies. Rational voices, such as Britain's chief scientific adviser John Beddington, are saying quite logically that we should not compare the events in Japan with the situation in Britain, which does not have the same earthquake risk. Unfortunately, such arguments are unlikely to prevail in the politics of risky technologies.
  • firms and investors involved in nuclear power have often failed to take regulatory and political risk into account; history shows that nuclear accidents can lead to tighter regulations, which in turn can increase nuclear costs. Further ahead, the proponents of hazardous technologies need to bear the full costs of their products, including insurance liabilities and the cost of independent monitoring of environmental and health effects. As it currently stands, taxpayers would pay for any future nuclear incident.
  • Critics of technology are often dubbed in policy circles as anti-science. Yet critical thinking is central to any rational decision-making process - it is less scientific to support a technology uncritically. Accidents happen with all technologies, and are regrettable but not disastrous so long as the technology does not have catastrophic potential; this raises significant questions about whether we want to adopt technologies that do have such potential.
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation. In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing (a minimal sketch of such an experiment appears after this list).
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model. Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid. The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid. The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
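To make trend 5's "test and learn" mind-set concrete, here is a minimal sketch of how such a business experiment might be evaluated. The conversion counts and the page-redesign scenario are invented, and the two-proportion z-test is just one standard choice of analysis, not anything McKinsey prescribes:

```python
# Illustrative only: the kind of "test and learn" experiment trend 5
# describes. All numbers are hypothetical; the two-proportion z-test is
# a standard way to judge whether variant B beats control A.
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of control A and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: existing page A vs. redesigned page B.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=550, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

In this invented example the redesign clears the conventional 5% significance bar, so a test-and-learn organisation would roll it out and move on to the next experiment.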
Weiye Loh

Australian media take note: the BBC understands balance in climate change coverage - 0 views

  • It is far from accurate to refer to “science” as a single entity (as I just have). Many arguments that dispute the consensus about climate change being the result of man-made activity talk about “scientists” as though they are “all in it together” and “supporting each other”. This implies some grand conspiracy. But science is a competition, not a collusion. If anything, they are all against each other. No given person or research team has the whole picture of climate science. The range of scientific disciplines that work in this area is vast. Indeed there are few areas of science which do not potentially have something to contribute to the area. But put a geologist and a geneticist in a room together and they can barely speak the same language. Far from some great conspiracy, the fact that the Intergovernmental Panel on Climate Change has come to a consensus about climate change is truly extraordinary.
  • So the report is recommending that journalists do what they should always have done – investigate and verify. By all means ask another expert’s point of view, determine whether the latest finding is in fact good science or what its implications are. But we need to move away from the idea of “balance” between those who believe it is all a big conspiracy and those who have done some work and looked at the actual evidence. The report concludes that in particular the BBC must take special care to continue efforts to ensure viewers are able to distinguish well-established fact from opinion on scientific issues, and to communicate this distinction clearly to the audience. In other words, to remember that the plural of anecdote is not data.
  •  On Wednesday the BBC Trust released their report "Review of impartiality and accuracy of the BBC's coverage of science". The report has resulted in the BBC deciding to reflect scientific consensus about climate change in their coverage of the issue. As a science communicator I applaud this decision. I understand and support the necessity to provide equal voice to political parties during an election campaign (indeed, I have done this, as an election occurred during my two years writing science for the ABC). But science is not politics. And scientists are not politicians. Much of the confusion about the climate change debate stems from a deep ignorance among the general population about how science works. And believe me this really is something "science" as an entity needs to address.
Weiye Loh

The Black Swan of Cairo | Foreign Affairs - 0 views

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • ...21 more annotations...
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations (a toy illustration appears after this list). Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007.
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
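The turkey problem described in the list above lends itself to a toy simulation. The numbers below are invented purely to show how an estimate built from a calm history says nothing about the tail:

```python
# Illustrative only: Taleb's "turkey problem". An observer who infers
# stability from the absence of past variation grows more confident every
# day -- right up to the shock that the historical record never contained.
import random

random.seed(1)

# 1,000 calm days: small fluctuations around a daily "well-being" of 100.
history = [100 + random.gauss(0, 1) for _ in range(1000)]

worst_observed = min(history)
print(f"Worst day in 1,000 days of history: {worst_observed:.1f}")

# A naive model treats the observed worst case as a floor...
day_1001 = -50  # ...then Thanksgiving arrives, far outside the sample.
print(f"Day 1,001: {day_1001} -- the 'impossible' tail event")
```

No amount of additional calm history would have improved the forecast; the information that matters lives in the variation the record never showed.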
Weiye Loh

Probing the dark web | plus.maths.org - 0 views

  • We spoke to Hsinchun Chen from the University of Arizona, who is involved with the Dark Web terrorism research project, which develops automated tools to collect and analyse terrorist content from the Internet. We also spoke to Filippo Menczer from Indiana University about Truthy, a free tool for analysing how information spreads on Twitter that has been useful in spotting astroturfing.
  •  Information on the web can help us catch terrorists and criminals, and it can also identify a practice called astroturfing - creating the false impression that there's huge grassroots support for some cause or person using false user accounts. It's a big problem in elections and other types of political conflicts (a toy version of such a signal is sketched below).
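A crude version of the kind of signal that tools like Truthy formalise far more carefully can be sketched in a few lines. The accounts, messages, and the threshold of three accounts are all hypothetical; real astroturf detection draws on much richer features (timing, network structure, near-duplicate text):

```python
# Illustrative only: flag messages pushed verbatim by many distinct
# accounts, one naive signal of manufactured "grassroots" support.
from collections import defaultdict

tweets = [  # hypothetical (account, text) pairs
    ("user_a", "Candidate X is the people's choice!"),
    ("user_b", "Candidate X is the people's choice!"),
    ("user_c", "Candidate X is the people's choice!"),
    ("user_d", "Just saw a great film tonight."),
]

accounts_by_text = defaultdict(set)
for account, text in tweets:
    accounts_by_text[text].add(account)

for text, accounts in accounts_by_text.items():
    if len(accounts) >= 3:  # arbitrary threshold for this sketch
        print(f"Possible astroturfing ({len(accounts)} accounts): {text!r}")
```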
Weiye Loh

The Real Hoax Was Climategate | Media Matters Action Network - 0 views

  • Sen. Jim Inhofe's (R-OK) biggest claim to fame has been his oft-repeated line that global warming is "the greatest hoax ever perpetrated on the American people."
  • In 2003, he conceded that the earth was warming, but denied it was caused by human activity and suggested that "increases in global temperatures may have a beneficial effect on how we live our lives."
  • In 2009, however, he appeared on Fox News to declare that the earth was actually cooling, claiming "everyone understands that's the case" (they don't, because it isn't).
  • ...7 more annotations...
  • Inhofe's battle against climate science kicked into overdrive when a series of illegally obtained emails surfaced from the Climatic Research Unit at the University of East Anglia.
  • When the dubious reports surfaced about flawed science, manipulated data, and unsubstantiated studies, Inhofe was ecstatic.  In March, he viciously attacked former Vice President Al Gore for defending the science behind climate change
  • Unfortunately for Senator Inhofe, none of those things are true.  One by one, the pillars of evidence supporting the alleged "scandals" have shattered, causing the entire "Climategate" storyline to come crashing down. 
  • a panel established by the University of East Anglia to investigate the integrity of the research of the Climatic Research Unit wrote: "We saw no evidence of any deliberate scientific malpractice in any of the work of the Climatic Research Unit and had it been there we believe that it is likely that we would have detected it."
  • Responding to allegations that Dr. Michael Mann tampered with scientific evidence, Pennsylvania State University conducted a thorough investigation. It concluded: "The Investigatory Committee, after careful review of all available evidence, determined that there is no substance to the allegation against Dr. Michael E. Mann, Professor, Department of Meteorology, The Pennsylvania State University.  More specifically, the Investigatory Committee determined that Dr. Michael E. Mann did not engage in, nor did he participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research, or other scholarly activities."
  • London's Sunday Times retracted its story, echoed by dozens of outlets, that the IPCC issued an unsubstantiated report claiming 40% of the Amazon rainforest was endangered due to changing rainfall patterns.  The Times wrote: "In fact, the IPCC's Amazon statement is supported by peer-reviewed scientific evidence. In the case of the WWF report, the figure had, in error, not been referenced, but was based on research by the respected Amazon Environmental Research Institute (IPAM) which did relate to the impact of climate change."
  • The Times also admitted it misrepresented the views of Dr. Simon Lewis, a Royal Society research fellow at the University of Leeds, implying he agreed with the article's false premise and believed the IPCC should not utilize reports issued by outside organizations.  In its retraction, the Times was forced to admit: "Dr Lewis does not dispute the scientific basis for both the IPCC and the WWF reports," and, "We accept that Dr Lewis holds no such view... A version of our article that had been checked with Dr Lewis underwent significant late editing and so did not give a fair or accurate account of his views on these points. We apologise for this."
  •  "The Real Hoax Was Climategate", July 02, 2010, by Chris Harris.
Weiye Loh

Wk 4 Online censorship & digital access: Mormon Church Attacks Wikileaks - 6 views

WIKILEAKS RELEASES SECRET CHURCH DOCUMENTS! The first link is an article regarding Wikileaks releasing a 'copyrighted' and confidential Church document of the Mormons (also known as the Church of J...

Mormons Scientology Wikileaks Copyright Censorship
