
New Media Ethics 2009 course / Group items tagged: tools


Jody Poh

Web tools help protect human rights activists - 7 views

1) I think it depends on what is being censored. Opinions, for instance, should not be censored, because doing so violates a person's natural rights. However, one can argue that online censors...

"Online censorship" "digital rights" 'Internet privacy tools"

Weiye Loh

In Wired Singapore Classrooms, Cultures Clash Over Web 2.0 - Technology - The Chronicle... - 0 views

  • Dozens of freshmen at Singapore Management University spent one evening last week learning how to "wiki," or use the software that lets large numbers of people write and edit class projects online. Though many said experiencing a public editing process similar to that of Wikipedia could prove valuable, some were wary of the collaborative tool, with its public nature and the ability to toss out or revise the work of their classmates.
  • It puts students in the awkward position of having to publicly correct a peer, which can cause the corrected person to lose face.
  • "You have to be more aware of others and have a sensitivity to others."
  • ...8 more annotations...
  • While colleges have been trumpeting the power of social media as an educational tool, here in Asia, going public with classwork runs counter to many cultural norms, surprising transplanted professors and making some students a little uneasy.
  • Publicly oriented Web 2.0 tools, like wikis, for instance, run up against ideas about how one should treat others in public. "People were very reluctant to edit things that other people had posted," said American-trained C. Jason Woodard, an assistant professor of information systems who started the wiki project two years ago. "I guess out of deference. People were very careful to not want to edit their peers. Getting people out of that mind-set has been a real challenge."
  • Students are also afraid of embarrassing themselves. Some privately expressed concern to me about putting unfinished work out on the Web for the world to see, as the assignment calls for them to do
  • faced hesitancy when asking students to use social-media tools for class projects. Few students seemed to freely post to blogs or Twitter, electing instead to communicate using Facebook accounts with the privacy set so that only close friends could see them
  • In a small country like Singapore, the traditional face-to-face network still reigns supreme. Members of a network are extremely loyal to that network, and if you are outside of it, a lot of times you aren't even given the time of day.
  • In fact, Singapore's future depends on technology and innovation at least according to its leaders, who have worked for years to position the country as friendly to the foreign investment that serves as its lifeblood. The city-state literally has no natural resources except its people, who it hopes to turn into "knowledge workers" (a buzzword I heard many times during my visit).
  • Yet this is a culture that many here describe as conservative, where people are not known for pushing boundaries. That was the first impression that Giorgos Cheliotis had when he first arrived to teach in Singapore several years ago from his native Greece.
  • he suspects they may be more comfortable because they are seniors, and because they feel that it has been assigned, and so they must.
Jude John

What's so Original in Academic Research? - 26 views

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • ...20 more annotations...
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are. (A toy version of this gridding-and-averaging step is sketched after this list of annotations.)
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station from wind and sun more in one year than the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
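The gridding-and-averaging step described in the annotations above (the one noting how Nasa, Noaa and the Met Office compute trends) is simple enough to sketch. The Python fragment below is a toy illustration only: the station coordinates and anomaly values are invented, the 5-degree cell size is an arbitrary choice, and real products apply many bias corrections first. It bins station anomalies into latitude/longitude cells, averages within each cell, then combines the cells with a cos(latitude) area weight.

```python
import math
from collections import defaultdict

# Hypothetical station records: (latitude, longitude, temperature anomaly in C).
# Real datasets hold millions of monthly readings and need bias corrections first.
stations = [
    (51.5, -0.1, 0.8),   # London-ish
    (40.7, -74.0, 0.6),  # New York-ish
    (35.7, 139.7, 0.9),  # Tokyo-ish
    (-33.9, 151.2, 0.4), # Sydney-ish
    (64.1, -21.9, 1.2),  # Reykjavik-ish
]

CELL = 5.0  # grid cell size in degrees (arbitrary for this sketch)

def cell_of(lat, lon):
    """Map a station to its CELL x CELL degree grid cell."""
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# 1. Average all stations that fall inside the same grid cell.
cells = defaultdict(list)
for lat, lon, anomaly in stations:
    cells[cell_of(lat, lon)].append(anomaly)

# 2. Combine cell averages, weighting each cell by cos(latitude)
#    so high-latitude cells (which cover less area) count for less.
num = den = 0.0
for (ilat, _), anomalies in cells.items():
    cell_mean = sum(anomalies) / len(anomalies)
    weight = math.cos(math.radians((ilat + 0.5) * CELL))
    num += weight * cell_mean
    den += weight

print(f"Area-weighted mean anomaly: {num / den:+.2f} C")
```

Muller's team, as described above, instead weights individual stations by reliability rather than averaging whole grid cells; either way, the headline "global temperature" is the output of an explicit, checkable procedure, which is exactly what the project wants to put online.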
Weiye Loh

Measuring Social Media: Who Has Access to the Firehose? - 0 views

  • The question that the audience member asked — and one that we tried to touch on a bit in the panel itself — was who has access to this raw data. Twitter doesn’t comment on who has full access to its firehose, but to Weil’s credit he was at least forthcoming with some of the names, including stalwarts like Microsoft, Google and Yahoo — plus a number of smaller companies.
  • In the case of Twitter, the company offers free access to its API for developers. The API can provide access and insight into information about tweets, replies and keyword searches, but as developers who work with Twitter — or any large scale social network — know, that data isn’t always 100% reliable. Unreliable data is a problem when talking about measurements and analytics, where the data is helping to influence decisions related to social media marketing strategies and allocations of resources.
  • One of the companies that has access to Twitter's data firehose is Gnip. As we discussed in November, Twitter has entered into a partnership with Gnip that allows the social data provider to resell access to the Twitter firehose. This is great on one level, because it means that businesses and services can access the data. The problem, as noted by panelist Raj Kadam, the CEO of Viralheat, is that Gnip's access can be prohibitively expensive.
  • ...3 more annotations...
  • The problems with reliable access to analytics and measurement information are by no means limited to Twitter. Facebook data is also tightly controlled. With Facebook, privacy controls built into the API are designed to prevent mass data scraping. This is absolutely the right decision. However, a reality of social media measurement is that Facebook Insights isn't always reachable, and the data collected from the tool is sometimes inaccurate. It's no surprise there's a disconnect between the data that marketers and community managers want and the data that can be reliably accessed. Twitter and Facebook were both designed as tools for consumers. It's only been in the last two years that the platform ecosystem aimed at serving large brands and companies
  • The data that companies like Twitter, Facebook and Foursquare collect are some of their most valuable assets. It isn't fair to expect a free ride or first-class access to the data by anyone who wants it. Having said that, more transparency about what data is available to services and brands is needed and necessary. We're just scratching the surface of what social media monitoring, measurement and management tools can do. To get to the next level, it's important that we all question who has access to the firehose.
  • We Need More Transparency for How to Access and Connect with Data
juliet huang

tools to live forever? - 1 views

  • According to a news story on nanotechnology, in the future the wealthy will be able to use nanotechnology to modify parts of their existing or future genetic heritage, i.e. they can alter body parts in non-invasive procedures or correct anomalies in their future children. http://www.heraldsun.com.au/business/fully-frank/the-tools-to-live-forever/story-e6frfinf-1225791751968 This could help them evolve into a different, "better" species. Ethical questions: most of the issues we have discussed in ethics operate at the macro level, perpetuating a social group's agenda, but biotechnology has the potential to make this divide a reality. It is no longer only an ethical question; it has the power to make what we discuss in class real. To frame it as an ethical question: who gets to decide, and how is the power distributed? Power will always lie behind the use of technologies, but who will decide how this technology is used, and for whose good? And if it is for a larger good, who can moderate the technology's use to ensure all social actors are represented?
Weiye Loh

Drone journalism takes off - ABC News (Australian Broadcasting Corporation) - 0 views

  • Instead of acquiring military-style multi-million dollar unmanned aerial vehicles the size of small airliners, the media is beginning to go micro, exploiting rapid advances in technology by deploying small toy-like UAVs to get the story.
  • Last November, drone journalism hit the big time after a Polish activist launched a small craft with four helicopter-like rotors called a quadrocopter. He flew the drone low over riot police lines to record a violent demonstration in Warsaw. The pictures were extraordinarily different from run-of-the-mill protest coverage. Posted online, the images went viral. More significantly, this birds-eye view clip found its way onto the bulletins and web pages of mainstream media.
  • Drone Journalism Lab, a research project to determine the viability of remote airborne media.
  • Drones play an increasing and controversial role in modern warfare. From Afghanistan and Pakistan to Iran and Yemen, they have become a ubiquitous symbol of Washington's war on terrorism. Critics point to the mounting drone-induced death toll as evidence that machines, no matter how sophisticated, cannot discriminate between combatants and innocent bystanders. Now drones are starting to fly into a more peaceful, yet equally controversial role in the media. Rapid technological advances in low-cost aerial platforms herald the age of drone journalism. But it will not be all smooth flying: this new media tool can expect to be buffeted by the issues of safety, ethics and legality.
Weiye Loh

Probing the dark web | plus.maths.org - 0 views

  • We spoke to Hsinchun Chen from the University of Arizona, who is involved with the dark web terrorism research project, which develops automated tools to collect and analyse terrorist content from the Internet. We also spoke to Filippo Menczer from Indiana University about Truthy, a free tool for analysing how information spreads on Twitter that has been useful in spotting astroturfing. Listen to "Probing the dark web"
  • Information on the web can help us catch terrorists and criminals, and it can also identify a practice called astroturfing: creating the false impression that there's huge grassroots support for some cause or person using false user accounts. It's a big problem in elections and other types of political conflicts.
Weiye Loh

Truthy - 0 views

  • Truthy is a research project that helps you understand how memes spread online. We collect tweets from Twitter and analyze them. With our statistics, images, movies, and interactive data, you can explore these dynamic networks. Our first application was the study of astroturf campaigns in elections. Currently, we're extending our focus to several themes. Browse the collection on the Memes page. Check out the Movie tool to browse and create animations of meme networks.
Weiye Loh

Judge dismisses authors' case against Google Books | Internet & Media - CNET News - 0 views

  • "In my view, Google Books provides significant public benefits. It advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders. It has become an invaluable research tool that permits students, teachers, librarians, and others to more efficiently identify and locate books. It has given scholars the ability, for the first time, to conduct full-text searches of tens of millions of books. It preserves books, in particular out-of-print and old books that have been forgotten in the bowels of libraries, and it gives them new life. It facilitates access to books for print-disabled and remote or underserved populations. It generates new audiences and creates new sources of income for authors and publishers. Indeed, all society benefits."
mingli chng

Ethics: SteveOuting.com - 0 views

shared by mingli chng on 19 Aug 09
  • Summary: The article highlights a story that shows how the internet erodes privacy and helps to spread online rumours. Question: With the revolutionary increase in blogs and other tools that make anyone a "creator of information", who guards the gates and prevents violations of privacy and other abuses? Problem: The rise of New Media and the ethics of gossip and rumours.
Jude John

Democracy 2.0 Awaits an Upgrade - 3 views

http://www.nytimes.com/2009/09/12/world/americas/12iht-currents.html 1. "President Obama declared during the campaign that "we are the ones we've been waiting for." That messianic phrase held the ...

democracy technology

started by Jude John on 14 Sep 09 no follow-up yet
Weiye Loh

Porn is Good! - 20 views

"Also, I do not believe that people are born knowing how to engage in sexual activity. " From a Socratic perspective, knowledge is inherent in us. You just need to ask the right question at the ri...

pornography

juliet huang

Virus as a call for help, as a part of a larger social problem - 7 views

I agree with this view, and I would add that yes, it is probably more profitable for the capitalist, wired society to continue creating anti-virus programs, opening more IT repair shops, etc., than to...

Virus

Weiye Loh

The Matthew Effect § SEEDMAGAZINE.COM - 0 views

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • ...7 more annotations...
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist's worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist's citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters. (A toy rich-get-richer simulation follows this list of annotations.)
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  • WHEN IT COMES TO SCIENTIFIC PUBLISHING AND FAME, THE RICH GET RICHER AND THE POOR GET POORER. HOW CAN WE BREAK THIS FEEDBACK LOOP?
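The rich-get-richer dynamic the article describes is commonly modelled as preferential attachment: each new citation goes to a paper with probability proportional to the citations that paper already has. The sketch below is a generic toy model, not anything taken from the article or from Merton; the paper counts and parameters are invented.

```python
import random

random.seed(1)

def simulate_citations(n_papers=100, n_citations=5000):
    """Preferential attachment: each new citation picks a paper with
    probability proportional to (1 + its current citation count)."""
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [1 + c for c in counts]
        chosen = random.choices(range(n_papers), weights=weights, k=1)[0]
        counts[chosen] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
top10_share = sum(counts[:10]) / sum(counts)
print(f"Top 10% of papers hold {top10_share:.0%} of all citations")
```

Even though every paper starts out identical, a small early lead compounds until a handful of papers hold most of the citations, which is the feedback loop the standfirst asks how to break.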
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler's subjects didn't decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn't be dramatic. "These are the results that pass all the tests," he says. "The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!" (A numerical sketch of this selection effect follows this list of annotations.)
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn't random at all but instead skewed heavily toward positive results. (A second sketch after this list simulates this small-study skew.)
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
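The combination of regression to the mean and selective publication discussed in the annotations above can be demonstrated with a short simulation. This is a hypothetical sketch, not Schooler's or Ioannidis's actual data: the true effect size, the noise level and the "publication" threshold are all invented for illustration.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # the real underlying effect size (made up)
NOISE_SD = 0.5      # sampling noise in each study's estimate (made up)
THRESHOLD = 0.8     # an arbitrary bar a first estimate must clear to be "published"

published_first, published_replication = [], []
for _ in range(100_000):
    first = random.gauss(TRUE_EFFECT, NOISE_SD)
    if first > THRESHOLD:                                   # only striking results get written up
        replication = random.gauss(TRUE_EFFECT, NOISE_SD)   # independent redo of the same truth
        published_first.append(first)
        published_replication.append(replication)

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean published first result: {statistics.mean(published_first):.2f}")
print(f"Mean replication:            {statistics.mean(published_replication):.2f}")
```

The published first results overshoot the true effect because only lucky draws clear the bar, while the replications, free of that selection, fall back toward the truth; the apparent "decline" here is entirely an artifact, matching the article's suggestion that the decline effect is a decline of illusion.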
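Palmer's funnel-graph diagnostic can be sketched the same way. Again this is a toy model with invented numbers: small studies are noisier, and here only "significant" small studies are assumed to reach print, while large studies are assumed to be published regardless of outcome.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1  # made-up true effect

def run_study(n):
    """Simulate one study of sample size n; estimates are noisier when n is small."""
    se = 1.0 / n ** 0.5                 # standard error shrinks with sample size
    estimate = random.gauss(TRUE_EFFECT, se)
    significant = estimate > 1.96 * se  # crude one-sided 5% test
    return estimate, significant

for n in (10, 50, 250, 1000):
    published = []
    for _ in range(20_000):
        est, sig = run_study(n)
        if sig or n >= 250:  # assumption: large studies get published whatever they find
            published.append(est)
    print(f"n={n:5d}  mean published estimate = {statistics.mean(published):+.3f}")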
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • in the last several months, one name turned up, with uncanny regularity, in the No. 1 spot for each and every term: J. C. Penney. The company bested millions of sites — and not just in searches for dresses, bedding and area rugs. For months, it was consistently at or near the top in searches for “skinny jeans,” “home decor,” “comforter sets,” “furniture” and dozens of other words and phrases, from the blandly generic (“tablecloths”) to the strangely specific (“grommet top curtains”).
  • J. C. Penney even beat out the sites of manufacturers in searches for the products of those manufacturers. Type in “Samsonite carry on luggage,” for instance, and Penney for months was first on the list, ahead of Samsonite.com.
  • the digital age’s most mundane act, the Google search, often represents layer upon layer of intrigue. And the intrigue starts in the sprawling, subterranean world of “black hat” optimization, the dark art of raising the profile of a Web site with methods that Google considers tantamount to cheating.
  • ...8 more annotations...
  • Despite the cowboy outlaw connotations, black-hat services are not illegal, but trafficking in them risks the wrath of Google. The company draws a pretty thick line between techniques it considers deceptive and “white hat” approaches, which are offered by hundreds of consulting firms and are legitimate ways to increase a site’s visibility. Penney’s results were derived from methods on the wrong side of that line, says Mr. Pierce. He described the optimization as the most ambitious attempt to game Google’s search results that he has ever seen.
  • TO understand the strategy that kept J. C. Penney in the pole position for so many searches, you need to know how Web sites rise to the top of Google’s results. We’re talking, to be clear, about the “organic” results — in other words, the ones that are not paid advertisements. In deriving organic results, Google’s algorithm takes into account dozens of criteria, many of which the company will not discuss.
  • But it has described one crucial factor in detail: links from one site to another. If you own a Web site, for instance, about Chinese cooking, your site's Google ranking will improve as other sites link to it. The more links to your site, especially those from other Chinese cooking-related sites, the higher your ranking. In a way, what Google is measuring is your site's popularity by polling the best-informed online fans of Chinese cooking and counting their links to your site as votes of approval. (A simplified link-counting sketch follows this list of annotations.)
  • But even links that have nothing to do with Chinese cooking can bolster your profile if your site is barnacled with enough of them. And here’s where the strategy that aided Penney comes in. Someone paid to have thousands of links placed on hundreds of sites scattered around the Web, all of which lead directly to JCPenney.com.
  • Who is that someone? A spokeswoman for J. C. Penney, Darcie Brossart, says it was not Penney.
  • “J. C. Penney did not authorize, and we were not involved with or aware of, the posting of the links that you sent to us, as it is against our natural search policies,” Ms. Brossart wrote in an e-mail. She added, “We are working to have the links taken down.”
  • Using an online tool called Open Site Explorer, Mr. Pierce found 2,015 pages with phrases like “casual dresses,” “evening dresses,” “little black dress” or “cocktail dress.” Click on any of these phrases on any of these 2,015 pages, and you are bounced directly to the main page for dresses on JCPenney.com.
  • Some of the 2,015 pages are on sites related, at least nominally, to clothing. But most are not. The phrase "black dresses" and a Penney link were tacked to the bottom of a site called nuclear.engineeringaddict.com. "Evening dresses" appeared on a site called casino-focus.com. "Cocktail dresses" showed up on bulgariapropertyportal.com. "Casual dresses" was on a site called elistofbanks.com. "Semi-formal dresses" was pasted, rather incongruously, on usclettermen.org.
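The link-counting idea described in the annotations above, and exploited by the paid-links scheme, can be illustrated with a toy ranking function. This is a deliberately simplified sketch, not Google's actual algorithm, and the domain names are invented stand-ins for the kinds of unrelated sites the article mentions. The point is only that a page showered with inbound links from anywhere rises to the top of a link-based score.

```python
# Toy link graph: site -> sites it links to. "retailer.example" receives many
# inbound links from unrelated pages, mimicking a paid-link campaign.
links = {
    "cooking-blog.example":  ["recipes.example", "retailer.example"],
    "recipes.example":       ["cooking-blog.example", "retailer.example"],
    "casino-spam.example":   ["retailer.example"],
    "banks-list.example":    ["retailer.example"],
    "lettermen-fan.example": ["retailer.example"],
    "retailer.example":      [],
}

def pagerank(links, damping=0.85, iterations=50):
    """Very small power-iteration PageRank-style score; dangling pages spread rank evenly."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            share = rank[page] / (len(outs) or n)
            targets = outs or pages          # dangling node: distribute rank everywhere
            for t in targets:
                new[t] += damping * share
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {page}")
```

With the retailer receiving links from every other page, it ends up with the highest score even though none of the linking sites has anything to do with its products, which is exactly the weakness the black-hat campaign exploited.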
Weiye Loh

Learn to love uncertainty and failure, say leading thinkers | Edge question | Science |... - 0 views

  • Being comfortable with uncertainty, knowing the limits of what science can tell us, and understanding the worth of failure are all valuable tools that would improve people's lives, according to some of the world's leading thinkers.
  • The ideas were submitted as part of an annual exercise by the web magazine Edge, which invites scientists, philosophers and artists to opine on a major question of the moment. This year it was, "What scientific concept would improve everybody's cognitive toolkit?"
  • The public often misunderstands the scientific process and the nature of scientific doubt. This can fuel public rows over the significance of disagreements between scientists about controversial issues such as climate change and vaccine safety.
  • ...13 more annotations...
  • Carlo Rovelli, a physicist at the University of Aix-Marseille, emphasised the uselessness of certainty. He said that the idea of something being "scientifically proven" was practically an oxymoron and that the very foundation of science is to keep the door open to doubt.
  • "A good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability."
  • Physicist Lawrence Krauss of Arizona State University agreed. "In the public parlance, uncertainty is a bad thing, implying a lack of rigour and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time," he said.
  • "However, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning."
  • Neil Gershenfeld, director of the Massachusetts Institute of Technology's Centre for Bits and Atoms wants everyone to know that "truth" is just a model. "The most common misunderstanding about science is that scientists seek and find truth. They don't – they make and test models," he said.
  • "Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features – violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom."
  • Writer and web commentator Clay Shirky suggested that people should think more carefully about how they see the world. His suggestion was the Pareto principle, a pattern whereby the top 1% of the population control 35% of the wealth or, on Twitter, the top 2% of users send 60% of the messages. Sometimes known as the "80/20 rule", the Pareto principle means that the average is far from the middle. It is applicable to many complex systems. "And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world," said Shirky. "We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming." (A short sampling sketch follows this list.)
  • Kevin Kelly, editor-at-large of Wired, pointed to the value of negative results. "We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced."
  • Michael Shermer, publisher of the Skeptic Magazine, wrote about the importance of thinking "bottom up not top down", since almost everything in nature and society happens this way.
  • But most people don't see things that way, said Shermer. "Bottom up reasoning is counterintuitive. This is why so many people believe that life was designed from the top down, and why so many think that economies must be designed and that countries should be ruled from the top down."
  • Roger Schank, a psychologist and computer scientist, proposed that we should all know the true meaning of "experimentation", which he said had been ruined by bad schooling, where pupils learn that scientists conduct experiments and if we copy exactly what they did in our high school labs we will get the results they got. "In effect we learn that experimentation is boring, is something done by scientists and has nothing to do with our daily lives." Instead, he said, proper experiments are all about assessing and gathering evidence. "In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment. But people who don't see their actions as experiments, and those who don't know how to reason carefully from data, will continue to learn less well from their own experiences than those who do."
  • Lisa Randall, a physicist at Harvard University, argued that perhaps "science" itself would be a useful concept for wider appreciation. "The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned – while appreciating and categorising the extent and limitations of what we know – plays a big role in how we think.
  • "Many words that summarise the nature of science such as 'cause and effect', 'predictions', and 'experiments', as well as words that describe probabilistic results such as 'mean', 'median', 'standard deviation', and the notion of 'probability' itself help us understand more specifically what this means and how to interpret the world and behaviour within it."
Weiye Loh

A DIY Data Manifesto | Webmonkey | Wired.com - 0 views

  • Running a server is no more difficult than starting Windows on your desktop. That’s the message Dave Winer, forefather of blogging and creator of RSS, is trying to get across with his EC2 for Poets project.
  • Winer has put together an easy-to-follow tutorial so you too can set up a Windows-based server running in the cloud. Winer uses Amazon’s EC2 service, and for a few dollars a month his tutorial can have just about anyone up and running with their own server.
  • But education and empowerment aren’t Winer’s only goals. “I think it’s important to bust the mystique of servers,” says Winer, “it’s essential if we’re going to break free of the ‘corporate blogging silos.’”
  • ...8 more annotations...
  • The corporate blogging silos Winer is thinking of are services like Twitter, Facebook and WordPress. All three have been instrumental in the growth of the web, making it easy for anyone to publish. But because they suffer denial-of-service attacks, government shutdowns and growing pains, centralized services like Twitter and Facebook are vulnerable. Services wrapped up in a single company are also vulnerable to market whims: Geocities is gone, FriendFeed languishes at Facebook and Yahoo is planning to sell Delicious. A centralized web is a brittle web, one that can make our data and our communications tools disappear tomorrow.
  • But the web will likely never be completely free of centralized services and Winer recognizes that. Most people will still choose convenience over freedom. Twitter’s user interface is simple, easy to use and works on half a dozen devices.
  • Winer isn’t the only one who believes the future of the web will be distributed systems that aren’t controlled by any single corporation or technology platform. Microformats founder Tantek Çelik is also working on a distributed publishing system that seeks to retain all the cool features of the social web, but remove the centralized bottleneck.
  • To be free of corporate blogging silos and centralized services, the web will need an army of distributed servers run by hobbyists: not just tech-savvy web admins, but ordinary people who love the web and want to experiment.
  • Winer wants to start by creating a loosely coupled, distributed microblogging service like Twitter. “I’m pretty sure we know how to create a micro-blogging community with open formats and protocols and no central point of failure,” he writes on his blog.
  • That means decoupling the act of writing from the act of publishing. The idea isn’t to create an open alternative to Twitter; it’s to remove the need to write in Twitter in order to publish on Twitter. Instead you write with the tools of your choice and publish to your own server.
  • If everyone publishes first to their own server there’s no single point of failure. There’s no fail whale, and no company owns your data. Once the content is on your server you can then push it on to wherever you’d like: Twitter, Tumblr, WordPress or whatever the site du jour is ten years from now.
  • The glue that holds this vision together is RSS. Winer sees RSS as the ideal broadcast mechanism for the distributed web, and in fact he’s already using it: Winer has an RSS feed of links that are then pushed on to Twitter. (A minimal feed-generation sketch follows below.)
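As a rough illustration of the “publish on your own server, syndicate everywhere” flow Winer describes, the sketch below builds a minimal RSS 2.0 feed from locally stored posts using only the Python standard library. The post, URLs and output path are made up, and pushing the feed on to a particular service (Twitter, Tumblr and so on) would need that service’s own API, which is not shown here.

# Build a minimal RSS 2.0 feed from locally stored posts. The canonical copy
# of each post lives on your own server; downstream services just poll or
# mirror this feed. All names and URLs below are hypothetical.
from datetime import datetime, timezone
from email.utils import format_datetime
from xml.sax.saxutils import escape

posts = [
    {"title": "Hello, distributed web",
     "link": "https://example.org/2011/01/hello",
     "body": "Published here first, syndicated later."},
]

def build_rss(posts, site="https://example.org", title="My feed"):
    items = ""
    for p in posts:
        items += (
            "    <item>\n"
            f"      <title>{escape(p['title'])}</title>\n"
            f"      <link>{escape(p['link'])}</link>\n"
            f"      <description>{escape(p['body'])}</description>\n"
            f"      <pubDate>{format_datetime(datetime.now(timezone.utc))}</pubDate>\n"
            "    </item>\n"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<rss version="2.0">\n'
        "  <channel>\n"
        f"    <title>{escape(title)}</title>\n"
        f"    <link>{escape(site)}</link>\n"
        "    <description>Posts published on my own server first</description>\n"
        f"{items}"
        "  </channel>\n"
        "</rss>\n"
    )

# Write the feed next to the posts; readers and services subscribe to its URL.
with open("rss.xml", "w", encoding="utf-8") as f:
    f.write(build_rss(posts))

The ordering is the design point: the canonical copy lives on a server you control, and Twitter or any “site du jour” only ever sees a mirror of the feed.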
Weiye Loh

Can Science Be Used As A Diplomatic Tool? : NPR - 0 views

  • Some moon craft house instruments from a handful of countries - an example of international scientific collaboration. But how valuable is science in the diplomatic sphere? Biologist Nina Fedoroff, former science adviser to both Condoleezza Rice and Hillary Clinton, talks about her time in Washington.