
New Media Ethics 2009 course: Group items matching "Software" in title, tags, annotations or url


P2P Foundation » Blog Archive » Crowdsourced curation, reputation systems, an... - 0 views

  • A good example of manual curation vs. crowdsourced curation is the competing app markets on the Apple iPhone and Google Android phone operating systems.
  • Apple is a monarchy, albeit with a wise and benevolent king. Android is a burgeoning democracy, inefficient and messy, but free. Apple is the last, best example of the Industrial Age and its top-down, mass market/mass production paradigm.
  • They manufacture cool. They rely on “consumers”, and they protect those consumers from too many choices by selecting what is worthy, and what is not.
  • systems that allow crowdsourced judgment to be tweaked, not to the taste of the general mass, which produces lowest common denominator effects, but to people and experts that you can trust for their judgment.
  • these systems are now implemented by Buzz and Digg 4
  • Important for me though, is that they don’t just take your social graph as is, because that mixes many different people for different reasons, but that you can tweak the groups.
  • “This is the problem with the internet! It’s full of crap!” Many would argue that without professional producers, editors, publishers, and the natural scarcity that we became accustomed to, there’s a flood of low-quality material that we can’t possibly sift through on our own. From blogs to music to software to journalism, one of the biggest fears of the established order is how to handle the oncoming glut of mediocrity. Who shall tell us The Good from The Bad? “We need gatekeepers, and they need to be paid!”
  • The Internet has enabled us to build our social graph, and in turn, that social graph acts as an aggregate gatekeeper. The better that these systems for crowdsourcing the curation of content become, the more accurate the results will be.
  • This social-graph-as-curation is still relatively new, even by Internet standards. However, with tools like Buzz and Digg 4 (which allows you to see the aggregate ratings for content based on your social graph, and not the whole wide world) this technique is catching up to human publishers fast. For those areas where we don’t have strong social ties, we can count on reputation systems to help us “rate the raters”. These systems allow strangers to rate each other’s content, giving users some idea of who to trust, without having to know them personally. Yelp has a fairly mature reputation system, where locations are rated by users, but the users are rated, in turn, by each other.
  • Reputation systems and the social graph allow us to crowdsource curation. (A minimal sketch of this kind of trust-weighted aggregation follows this item.)
  • Can you imagine if Apple had to approve your videos for posting on YouTube, where every minute, 24 hours of footage are uploaded? There’s no way humans could keep up! The traditional forms of curation and gatekeeping simply cannot scale to meet the increase in production and transmission that the Internet allows. Crowdsourcing is the only curatorial/editorial mechanism that can scale to match the increased ability to produce that the Internet has given us.
  •  
    Crowdsourced curation, reputation systems, and the social graph
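
A rough sketch (Python) of the trust-weighted aggregation the excerpts above describe. This is not how Buzz, Digg 4 or Yelp actually implement their systems; every name and number below is invented for illustration.

    # Toy version of "rate the raters": a viewer's trust in each rater weights
    # that rater's score, so strangers count for little and the lowest-common-
    # denominator average is avoided.
    def weighted_score(ratings, trust, default_trust=0.1):
        """ratings: {rater: score}; trust: {rater: weight in [0, 1]}."""
        total, weight_sum = 0.0, 0.0
        for rater, score in ratings.items():
            w = trust.get(rater, default_trust)  # unknown raters get a small default weight
            total += w * score
            weight_sum += w
        return total / weight_sum if weight_sum else 0.0

    ratings = {"alice": 5, "bob": 2, "carol": 4, "dave": 1}
    trust = {"alice": 0.9, "carol": 0.8}          # the viewer's tweaked "group"
    print(weighted_score(ratings, trust))         # about 4.2, versus a raw mean of 3.0

The point of the sketch is only that the same ratings produce different rankings for different viewers once each viewer's own trusted group does the weighting.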

In Wired Singapore Classrooms, Cultures Clash Over Web 2.0 - Technology - The Chronicle... - 0 views

  • Dozens of freshmen at Singapore Management University spent one evening last week learning how to "wiki," or use the software that lets large numbers of people write and edit class projects online. Though many said experiencing a public editing process similar to that of Wikipedia could prove valuable, some were wary of the collaborative tool, with its public nature and the ability to toss out or revise the work of their classmates.
  • It puts students in the awkward position of having to publicly correct a peer, which can cause the corrected person to lose face.
  • "You have to be more aware of others and have a sensitivity to others."
  • While colleges have been trumpeting the power of social media as an educational tool, here in Asia, going public with classwork runs counter to many cultural norms, surprising transplanted professors and making some students a little uneasy.
  • Publicly oriented Web 2.0 tools, like wikis, for instance, run up against ideas about how one should treat others in public. "People were very reluctant to edit things that other people had posted," said American-trained C. Jason Woodard, an assistant professor of information systems who started the wiki project two years ago. "I guess out of deference. People were very careful to not want to edit their peers. Getting people out of that mind-set has been a real challenge."
  • Students are also afraid of embarrassing themselves. Some privately expressed concern to me about putting unfinished work out on the Web for the world to see, as the assignment calls for them to do
  • faced hesitancy when asking students to use social-media tools for class projects. Few students seemed to freely post to blogs or Twitter, electing instead to communicate using Facebook accounts with the privacy set so that only close friends could see them
  • In a small country like Singapore, the traditional face-to-face network still reigns supreme. Members of a network are extremely loyal to that network, and if you are outside of it, a lot of times you aren't even given the time of day.
  • In fact, Singapore's future depends on technology and innovation at least according to its leaders, who have worked for years to position the country as friendly to the foreign investment that serves as its lifeblood. The city-state literally has no natural resources except its people, who it hopes to turn into "knowledge workers" (a buzzword I heard many times during my visit).
  • Yet this is a culture that many here describe as conservative, where people are not known for pushing boundaries. That was the first impression that Giorgos Cheliotis had when he first arrived to teach in Singapore several years ago from his native Greece.
  • he suspects they may be more comfortable because they are seniors, and because they feel that it has been assigned, and so they must do it.
  •  
    In Wired Singapore Classrooms, Cultures Clash Over Web 2.0

Apples and PCs: Who innovates more, Apple or HP? | The Economist - 1 views

  • In terms of processing power, speed, memory, and so on, how do Macs and PCs actually compare? And does Apple innovate in terms of basic hardware quality as often or less often than the likes of HP, Compaq, and other producers? This question is of broader interest from an economist's point of view because it also has to do with the age-old question of whether competition or monopoly is a better spur to innovation. In a certain sense, Apple is a monopolist, and PC makers are in a more competitive market. (I say in a certain sense because obviously Macs and PCs are substitutes; it's just that they're more imperfect substitutes than two PCs are for each other, in part because of software migration issues.)
  • Schumpeter argued long back that because a monopolist reaps the full reward from innovation, such firms would be more innovative. The case for patents relies in part on a version of this argument: companies are given monopoly rights over a new product for a period of time in order for them to be able to recoup the costs of innovation; without such protection, it is argued, they would not find it beneficial to innovate in the first place.
  • others have argued that competition spurs innovation by giving firms a way to differentiate themselves from their competitors (in a way, creating something new gives a company a temporary, albeit brief, "monopoly")
  •  
    Who innovates more, Apple or HP?

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but after all, western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
  •  
    Computer beats human at Japanese chess for first time

The Data-Driven Life - NYTimes.com - 0 views

  • Humans make errors. We make errors of fact and errors of judgment. We have blind spots in our field of vision and gaps in our stream of attention.
  • These weaknesses put us at a disadvantage. We make decisions with partial information. We are forced to steer by guesswork. We go with our gut.
  • Others use data.
  • Others use data. A timer running on Robin Barooah’s computer tells him that he has been living in the United States for 8 years, 2 months and 10 days. At various times in his life, Barooah — a 38-year-old self-employed software designer from England who now lives in Oakland, Calif. — has also made careful records of his work, his sleep and his diet.
  • A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration. Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.
  • “People have such very poor sense of time,” Barooah says, and without good time calibration, it is much harder to see the consequences of your actions. If you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them.
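
The before/after chart described above is easy to reproduce in miniature. The sketch below (Python) uses entirely fabricated numbers; only the method, comparing mean daily focused minutes on either side of the "stopped drinking coffee" line, mirrors the article.

    from statistics import mean

    # Hypothetical daily minutes of focused work, as logged by a timer.
    minutes_before = [95, 110, 80, 120, 100, 90, 105]     # while drinking coffee
    minutes_after  = [150, 140, 165, 155, 170, 160, 145]  # after quitting

    diff = mean(minutes_after) - mean(minutes_before)
    print(f"mean before: {mean(minutes_before):.0f} min/day")
    print(f"mean after:  {mean(minutes_after):.0f} min/day")
    print(f"difference:  {diff:+.0f} min/day")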

Roger Pielke Jr.'s Blog: Core Questions in the Governance of Innovation - 0 views

  • Today's NYT has a couple interesting articles about technological innovations that we may not want, and that we may wish to regulate in some manner, formally or informally.  These technologies suggest some core questions that lie at the heart of the management of innovation.
  • The first article discusses Google's Goggles, an application that allows people to search the internet based on an image taken by a smartphone. Google has decided not to allow this technology to include face recognition in its software, even though people have requested it.
  • Google could have put face recognition into the Goggles application; indeed, many users have asked for it. But Google decided against it because smartphones can be used to take pictures of individuals without their knowledge, and a face match could retrieve all kinds of personal information — name, occupation, address, workplace.
  • “It was just too sensitive, and we didn’t want to go there,” said Eric E. Schmidt, the chief executive of Google. “You want to avoid enabling stalker behavior.”
  • The second article focuses on innovations in high frequency trading in financial markets, which bears some responsibility for the so-called "flash crash" of May 6th last year, in which the DJIA plunged more than 700 points in just minutes.
  • One debate has focused on whether some traders are firing off fake orders thousands of times a second to slow down exchanges and mislead others. Michael Durbin, who helped build high-frequency trading systems for companies like Citadel and is the author of the book “All About High-Frequency Trading,” says that most of the industry is legitimate and benefits investors. But, he says, the rules need to be strengthened to curb some disturbing practices.
  • This situation raises what I see to be core questions in the governance of innovation -- to what degree can innovation be shaped for achieving intended purposes? And to what degree can the consequences of innovation be anticipated?

IPhone and Android Apps Breach Privacy - WSJ.com - 0 views

  • Few devices know more personal details about people than the smartphones in their pockets: phone numbers, current location, often the owner's real name—even a unique ID number that can never be changed or turned off.
  • An examination of 101 popular smartphone "apps"—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone's unique device ID to other companies without users' awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent age, gender and other personal details to outsiders.
  • The findings reveal the intrusive effort by online-tracking companies to gather personal data about people in order to flesh out detailed dossiers on them.
  • iPhone apps transmitted more data than the apps on phones using Google Inc.'s Android operating system. Because of the test's size, it's not known if the pattern holds among the hundreds of thousands of apps available.
  • TextPlus 4, a popular iPhone app for text messaging. It sent the phone's unique ID number to eight ad companies and the phone's zip code, along with the user's age and gender, to two of them.
  • Pandora, a popular music app, sent age, gender, location and phone identifiers to various ad networks. iPhone and Android versions of a game called Paper Toss—players try to throw paper wads into a trash can—each sent the phone's ID number to at least five ad companies. Grindr, an iPhone app for meeting gay men, sent gender, location and phone ID to three ad companies.
  • iPhone maker Apple Inc. says it reviews each app before offering it to users. Both Apple and Google say they protect users by requiring apps to obtain permission before revealing certain kinds of information, such as location.
  • The Journal found that these rules can be skirted. One iPhone app, Pumpkin Maker (a pumpkin-carving game), transmits location to an ad network without asking permission. Apple declines to comment on whether the app violated its rules.
  • With few exceptions, app users can't "opt out" of phone tracking, as is possible, in limited form, on regular computers. On computers it is also possible to block or delete "cookies," which are tiny tracking files. These techniques generally don't work on cellphone apps.
  • makers of TextPlus 4, Pandora and Grindr say the data they pass on to outside firms isn't linked to an individual's name. Personal details such as age and gender are volunteered by users, they say. The maker of Pumpkin Maker says he didn't know Apple required apps to seek user approval before transmitting location. The maker of Paper Toss didn't respond to requests for comment.
  • Many apps don't offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn't provide privacy policies on their websites or inside the apps at the time of testing. Neither Apple nor Google requires app privacy policies.
  • the most widely shared detail was the unique ID number assigned to every phone.
  • On iPhones, this number is the "UDID," or Unique Device Identifier. Android IDs go by other names. These IDs are set by phone makers, carriers or makers of the operating system, and typically can't be blocked or deleted. "The great thing about mobile is you can't clear a UDID like you can a cookie," says Meghan O'Holleran of Traffic Marketplace, an Internet ad network that is expanding into mobile apps. "That's how we track everything."
  • O'Holleran says Traffic Marketplace, a unit of Epic Media Group, monitors smartphone users whenever it can. "We watch what apps you download, how frequently you use them, how much time you spend on them, how deep into the app you go," she says. She says the data is aggregated and not linked to an individual.
  • Apple and Google ad networks let advertisers target groups of users. Both companies say they don't track individuals based on the way they use apps.
  • Apple limits what can be installed on an iPhone by requiring iPhone apps to be offered exclusively through its App Store. Apple reviews those apps for function, offensiveness and other criteria.
  • Apple says iPhone apps "cannot transmit data about a user without obtaining the user's prior permission and providing the user with access to information about how and where the data will be used." Many apps tested by the Journal appeared to violate that rule, by sending a user's location to ad networks, without informing users. Apple declines to discuss how it interprets or enforces the policy.
  • Google doesn't review the apps, which can be downloaded from many vendors. Google says app makers "bear the responsibility for how they handle user information." Google requires Android apps to notify users, before they download the app, of the data sources the app intends to access. Possible sources include the phone's camera, memory, contact list, and more than 100 others. If users don't like what a particular app wants to access, they can choose not to install the app, Google says.
  • Neither Apple nor Google requires apps to ask permission to access some forms of the device ID, or to send it to outsiders. When smartphone users let an app see their location, apps generally don't disclose if they will pass the location to ad companies.
  • Lack of standard practices means different companies treat the same information differently. For example, Apple says that, internally, it treats the iPhone's UDID as "personally identifiable information." That's because, Apple says, it can be combined with other personal details about people—such as names or email addresses—that Apple has via the App Store or its iTunes music services. By contrast, Google and most app makers don't consider device IDs to be identifying information.
  • A growing industry is assembling this data into profiles of cellphone users. Mobclix, the ad exchange, matches more than 25 ad networks with some 15,000 apps seeking advertisers. The Palo Alto, Calif., company collects phone IDs, encodes them (to obscure the number), and assigns them to interest categories based on what apps people download and how much time they spend using an app, among other factors. By tracking a phone's location, Mobclix also makes a "best guess" of where a person lives, says Mr. Gurbuxani, the Mobclix executive. Mobclix then matches that location with spending and demographic data from Nielsen Co.
  • Mobclix can place a user in one of 150 "segments" it offers to advertisers, from "green enthusiasts" to "soccer moms." For example, "die hard gamers" are 15-to-25-year-old males with more than 20 apps on their phones who use an app for more than 20 minutes at a time. Mobclix says its system is powerful, but that its categories are broad enough to not identify individuals. "It's about how you track people better," Mr. Gurbuxani says. (An illustrative sketch of this kind of rule-based segmentation follows this item.)
  • four app makers posted privacy policies after being contacted by the Journal, including Rovio Mobile Ltd., the Finnish company behind the popular game Angry Birds (in which birds battle egg-snatching pigs). A spokesman says Rovio had been working on the policy, and the Journal inquiry made it a good time to unveil it.
  • Free and paid versions of Angry Birds were tested on an iPhone. The apps sent the phone's UDID and location to the Chillingo unit of Electronic Arts Inc., which markets the games. Chillingo says it doesn't use the information for advertising and doesn't share it with outsiders.
  • Some developers feel pressure to release more data about people. Max Binshtok, creator of the DailyHoroscope Android app, says ad-network executives encouraged him to transmit users' locations. Mr. Binshtok says he declined because of privacy concerns. But ads targeted by location bring in two to five times as much money as untargeted ads, Mr. Binshtok says. "We are losing a lot of revenue."
  • Apple targets ads to phone users based largely on what it knows about them through its App Store and iTunes music service. The targeting criteria can include the types of songs, videos and apps a person downloads, according to an Apple ad presentation reviewed by the Journal. The presentation named 103 targeting categories, including: karaoke, Christian/gospel music, anime, business news, health apps, games and horror movies. People familiar with iAd say Apple doesn't track what users do inside apps and offers advertisers broad categories of people, not specific individuals. Apple has signaled that it has ideas for targeting people more closely. In a patent application filed this past May, Apple outlined a system for placing and pricing ads based on a person's "web history or search history" and "the contents of a media library." For example, home-improvement advertisers might pay more to reach a person who downloaded do-it-yourself TV shows, the document says.
  • The patent application also lists another possible way to target people with ads: the contents of a friend's media library. How would Apple learn who a cellphone user's friends are, and what kinds of media they prefer? The patent says Apple could tap "known connections on one or more social-networking websites" or "publicly available information or private databases describing purchasing decisions, brand preferences," and other data. In September, Apple introduced a social-networking service within iTunes, called Ping, that lets users share music preferences with friends. Apple declined to comment.
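
A hedged Python sketch of the rule-based segmentation attributed to Mobclix above. It is not Mobclix's code; the single rule is paraphrased from the article ("die hard gamers": 15-to-25-year-old males with more than 20 apps who use an app for more than 20 minutes at a time), and the profile fields are invented.

    def segment(profile):
        """Assign one of the interest categories described in the article."""
        if (profile.get("gender") == "male"
                and 15 <= profile.get("age", 0) <= 25
                and profile.get("apps_installed", 0) > 20
                and profile.get("longest_session_min", 0) > 20):
            return "die hard gamers"
        return "unsegmented"

    user = {"gender": "male", "age": 19, "apps_installed": 34, "longest_session_min": 42}
    print(segment(user))   # -> die hard gamers

A real system would hold some 150 such segments and feed them from the phone IDs, locations and usage data described above; the sketch only shows the shape of the mapping from a profile to a category.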

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method, " Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are. (A minimal sketch of this gridding-and-weighting step follows this item.)
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in Celsius instead of Fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station from wind and sun more in one year than the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
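
A minimal sketch (Python) of the gridding-and-weighting step described in the excerpts above: station readings are binned into latitude/longitude cells and averaged, with less reliable stations down-weighted. The stations, anomalies and weights below are invented, and the real Berkeley Earth pipeline is far more elaborate.

    from collections import defaultdict

    # (station_id, lat, lon, temperature_anomaly_C, reliability_weight)
    readings = [
        ("UK-001", 51.5, 0.1, 0.42, 1.0),
        ("UK-002", 52.2, 0.4, 0.55, 0.6),    # relocated station, down-weighted
        ("US-101", 40.7, -74.0, 0.31, 1.0),
    ]

    def cell(lat, lon, size=5.0):
        """Index of the size-degree grid cell containing (lat, lon)."""
        return (int(lat // size), int(lon // size))

    sums = defaultdict(lambda: [0.0, 0.0])            # cell -> [weighted sum, total weight]
    for _sid, lat, lon, anomaly, weight in readings:
        c = cell(lat, lon)
        sums[c][0] += weight * anomaly
        sums[c][1] += weight

    cell_means = {c: s / w for c, (s, w) in sums.items()}
    global_mean = sum(cell_means.values()) / len(cell_means)
    print(cell_means)                # the two UK stations share a cell and are averaged
    print(round(global_mean, 3))

A real analysis would also weight each cell by its surface area, handle empty cells, and correct the systematic biases described above; the sketch skips all of that and only shows the binning and weighting.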

'Scrapers' Dig Deep for Data on the Web - WSJ.com - 0 views

  • website PatientsLikeMe.com noticed suspicious activity on its "Mood" discussion board. There, people exchange highly personal stories about their emotional disorders, ranging from bipolar disease to a desire to cut themselves. It was a break-in. A new member of the site, using sophisticated software, was "scraping," or copying, every single message off PatientsLikeMe's private online forums.
  • PatientsLikeMe managed to block and identify the intruder: Nielsen Co., the privately held New York media-research firm. Nielsen monitors online "buzz" for clients, including major drug makers, which buy data gleaned from the Web to get insight from consumers about their products, Nielsen says.
  • The market for personal data about Internet users is booming, and in the vanguard is the practice of "scraping." Firms offer to harvest online conversations and collect personal details from social-networking sites, résumé sites and online forums where people might discuss their lives. The emerging business of web scraping provides some of the raw material for a rapidly expanding data economy. Marketers spent $7.8 billion on online and offline data in 2009, according to the New York management consulting firm Winterberry Group LLC. Spending on data from online sources is set to more than double, to $840 million in 2012 from $410 million in 2009.
  • The Wall Street Journal's examination of scraping—a trade that involves personal information as well as many other types of data—is part of the newspaper's investigation into the business of tracking people's activities online and selling details about their behavior and personal interests.
  • Some companies collect personal information for detailed background reports on individuals, such as email addresses, cell numbers, photographs and posts on social-network sites. Others offer what are known as listening services, which monitor in real time hundreds or thousands of news sources, blogs and websites to see what people are saying about specific products or topics.
  • One such service is offered by Dow Jones & Co., publisher of the Journal. Dow Jones collects data from the Web—which may include personal information contained in news articles and blog postings—that help corporate clients monitor how they are portrayed. It says it doesn't gather information from password-protected parts of sites.
  • The competition for data is fierce. PatientsLikeMe also sells data about its users. PatientsLikeMe says the data it sells is anonymized, no names attached.
  • Nielsen spokesman Matt Anchin says the company's reports to its clients include publicly available information gleaned from the Internet, "so if someone decides to share personally identifiable information, it could be included."
  • Internet users often have little recourse if personally identifiable data is scraped: There is no national law requiring data companies to let people remove or change information about themselves, though some firms let users remove their profiles under certain circumstances.
  •  
    The market for personal data about Internet users is booming, and in the vanguard is the practice of "scraping." Firms offer to harvest online conversations and collect personal details from social-networking sites, résumé sites and online forums where people might discuss their lives.

Land Destroyer: Alternative Economics - 0 views

  • Peer to peer file sharing (P2P) has made media distribution free and has become the bane of media monopolies. P2P file sharing means digital files can be copied and distributed at no cost. CD's, DVD's, and other older forms of holding media are no longer necessary, nor is the cost involved in making them or distributing them along a traditional logistical supply chain. Disc burners, however, allow users the ability to create their own physical copies at a fraction of the cost of buying the media from the stores. Supply and demand is turned on its head as the more popular a certain file becomes via demand, the more of it that is available for sharing, and the easier it is to obtain. Supply and demand increase in tandem towards a lower "price" of obtaining the said file. Consumers demand more as price decreases. Producers naturally want to produce more of something as price increases. Somewhere in between, consumers and producers meet at the market price or "market equilibrium." P2P technology eliminates material scarcity, thus the more a file is in demand, the more people end up downloading it, and the easier it is for others to find it and download it. Consider the implications this would have if technology made physical objects as easy to "share" as information is now. (A schematic formalization of this supply-and-demand point follows this item.)
  • In the end, it is not government regulations, legal contrivances, or licenses that govern information, but rather the free market mechanism commonly referred to as Adam Smith's self regulating "Invisible Hand of the Market." In other words, people selfishly seeking accurate information for their own benefit encourage producers to provide the best possible information to meet their demand. While this is not possible in a monopoly, particularly the corporate media monopoly of the "left/right paradigm" of false choice, it is inevitable in the field of real competition that now exists online due to information technology.
  • Compounding the establishment's troubles are cheaper cameras and cheaper, more capable software for 3D graphics, editing, mixing, and other post production tasks, allowing for the creation of an alternative publishing, audio and video industry. "Underground" counter-corporate music and film has been around for a long time but through the combination of technology and the zealous corporate lawyers disenfranchising a whole new generation that now seeks an alternative, it is truly coming of age.
  • With a growing community of people determined to become collaborative producers rather than fit into the producer/consumer paradigm, and 3D files for physical objects already being shared like movies and music, the implications are profound. Products, and the manufacturing technology used to make them will continue to drop in price, become easier to make for individuals rather than large corporations, just as media is now shifting into the hands of the common people. And like the shift of information, industry will move from the elite and their agenda of preserving their power, to the end of empowering the people.
  • In a future alternative economy where everyone is a collaborative designer, producer, and manufacturer instead of passive consumers and when problems like "global climate change," "overpopulation," and "fuel crises" cross our path, we will counter them with technical solutions, not political indulgences like carbon taxes, and not draconian decrees like "one-child policies."
  • We will become the literal architects of our own future in this "personal manufacturing" revolution. While these technologies may still appear primitive, or somewhat "useless" or "impractical," we must remember where our personal computers stood on the eve of the dawning of the information age and how quickly they changed our lives. And while many of us may be unaware of this unfolding revolution, you can bet the globalists, power brokers, and all those that stand to lose from it not only see it but are already actively fighting against it. Understandably it takes some technical know-how to jump into the personal manufacturing revolution. In part 2 of "Alternative Economics" we will explore real world "low-tech" solutions for becoming self-sufficient and local, and rediscover the empowerment granted by doing so.
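
The supply-and-demand claim above can be written out schematically. This is an illustrative formalization, not anything from the original post (LaTeX notation):

    % Conventional market: demand falls and supply rises with price, and the
    % two meet at an equilibrium price p*.
    D(p^{*}) = S(p^{*}), \qquad D'(p) < 0, \quad S'(p) > 0

    % P2P sharing: each of the n people who obtain a file also becomes a new
    % source for it, so the effective cost of obtaining the file, c(n), falls
    % as popularity rises instead of settling at a scarcity-based price.
    c'(n) < 0, \qquad \lim_{n \to \infty} c(n) = 0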

Hacker attacks threaten to dampen cloud computing's prospects | Reuters - 0 views

  • Security is a hot issue in the computing world. Hackers broke into Sony's networks and accessed the information of more than 1 million customers, the latest of several security breaches. The breaches were the latest attacks on high-profile firms, including defense contractor Lockheed Martin and Google, which pointed the blame at China.
  • Analysts and industry experts believe hardware-based security provides a higher level of protection than software with encryption added to data in the servers. Chipmakers are working to build more authentication into the silicon.
  • one of the problems cloud faces is that it is a fragmented market where many vendors provide different security solutions based on their own standards. Intel's rival ARM and Advanced Micro Devices are also in the process of embedding higher security in their chips and processors, but working with different partners. If there were an open standard to follow, it would help the industry build a much more secure cloud system, according to AMD.

Jonathan Stray » Measuring and improving accuracy in journalism - 0 views

  • Accuracy is a hard thing to measure because it’s a hard thing to define. There are subjective and objective errors, and no standard way of determining whether a reported fact is true or false
  • The last big study of mainstream reporting accuracy found errors (defined below) in 59% of 4,800 stories across 14 metro newspapers. This level of inaccuracy — where about one in every two articles contains an error — has persisted for as long as news accuracy has been studied, over seven decades now.
  • With the explosion of available information, more than ever it’s time to get serious about accuracy, about knowing which sources can be trusted. Fortunately, there are emerging techniques that might help us to measure media accuracy cheaply, and then increase it.
  • We could continuously sample a news source’s output to produce ongoing accuracy estimates, and build social software to help the audience report and filter errors. Meticulously applied, this approach would give a measure of the accuracy of each information source, and a measure of the efficiency of their corrections process (currently only about 3% of all errors are corrected.)
  • Real world reporting isn’t always clearly “right” or “wrong,” so it will often be hard to decide whether something is an error or not. But we’re not going for ultimate Truth here,  just a general way of measuring some important aspect of the idea we call “accuracy.” In practice it’s important that the error counting method is simple, clear and repeatable, so that you can compare error rates of different times and sources.
  • Subjective errors, though by definition involving judgment, should not be dismissed as merely differences in opinion. Sources found such errors to be about as common as factual errors and often more egregious [as rated by the sources.] But subjective errors are a very complex category
  • One of the major problems with previous news accuracy metrics is the effort and time required to produce them. In short, existing accuracy measurement methods are expensive and slow. I’ve been wondering if we can do better, and a simple idea comes to mind: sampling. The core idea is this: news sources could take an ongoing random sample of their output and check it for accuracy — a fact check spot check
  • Standard statistical theory tells us what the error on that estimate will be for any given number of samples (If I’ve got this right, the relevant formula is the standard error of a population proportion estimate without replacement; it is written out after this item's excerpt.) At a sample rate of a few stories per day, daily estimates of error rate won’t be worth much. But weekly and monthly aggregates will start to produce useful accuracy estimates
  • the first step would be admitting how inaccurate journalism has historically been. Then we have to come up with standardized accuracy evaluation procedures, in pursuit of metrics that capture enough of what we mean by “true” to be worth optimizing. Meanwhile, we can ramp up the efficiency of our online corrections processes until we find as many useful, legitimate errors as possible with as little staff time as possible. It might also be possible do data mining on types of errors and types of stories to figure out if there are patterns in how an organization fails to get facts right.
  • I’d love to live in a world where I could compare the accuracy of information sources, where errors got found and fixed with crowd-sourced ease, and where news organizations weren’t shy about telling me what they did and did not know. Basic factual accuracy is far from the only measure of good journalism, but perhaps it’s an improvement over the current sad state of affairs
  •  
    Professional journalism is supposed to be "factual," "accurate," or just plain true. Is it? Has news accuracy been getting better or worse in the last decade? How does it vary between news organizations, and how do other information sources rate? Is professional journalism more or less accurate than everything else on the internet? These all seem like important questions, so I've been poking around, trying to figure out what we know and don't know about the accuracy of our news sources. Meanwhile, the online news corrections process continues to evolve, which gives us hope that the news will become more accurate in the future.
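
For reference, the formula the author points to, the standard error of a population proportion estimated from a sample drawn without replacement, is (LaTeX notation):

    SE(\hat{p}) = \sqrt{\frac{\hat{p}\,(1 - \hat{p})}{n}}
                  \sqrt{\frac{N - n}{N - 1}}

    % n       = stories sampled and fact-checked
    % N       = stories published in the period
    % \hat{p} = error rate observed in the sample

As an invented illustration (the numbers are not from the post): spot-checking n = 30 of N = 600 stories published in a month and finding errors in half of them gives SE of roughly 0.091 x 0.975, about 0.089, i.e. a standard error of about nine percentage points on the estimated error rate, coarse for a single day but, as the author argues, useful at weekly or monthly scale.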
7More

Google Chrome OS: Ditch Your Hard Drives, the Future Is the Web | Gadget Lab | Wired.com - 2 views

  • With a strong focus on speed, the Chrome OS promises nearly instant boot times of about 7 seconds for users to log in to their computers.
  • It will not be available as a download to run and install. Instead, Chrome OS is only shipping on specific hardware from manufacturers Google has partnered with. That means if you want Chrome OS, you’ll have to purchase a Chrome OS device.
  • Chrome OS netbooks will not have traditional hard disk drives — they will rely on non-volatile flash memory and internet-based storage for saving all of your data.
    • Weiye Loh
       
      So who's going to own my data? Me? Or Google? Is it going to be secure? What happens when there's a breach of privacy? Do I have to sign a disclaimer before I use it? Hmm.
    • Jun Jie Tan
       
      on the internet, google owns you
  • ...1 more annotation...
  • All the applications will be web-based, meaning users won’t have to install apps, manage updates or even backup their data. All data will be stored in the cloud, and users won’t even have to bother with anti-virus software: Google claims it will monitor code to prevent malicious activity in Chrome OS web apps.
  •  
    Chrome OS netbooks will not have traditional hard disk drives - they will rely on non-volatile flash memory and internet-based storage for saving all of your data.
5More

Designers Make Data Much Easier to Digest - NYTimes.com - 0 views

  • On the benefit side, people become more engaged when they can filter information that is presented visually and make discoveries on their own. On the risk side, Professor Shneiderman says, tools as powerful as visualizations have the potential to mislead or confuse consumers. And privacy implications arise, he says, as increasing amounts of personal, housing, medical and financial data become widely accessible, searchable and viewable.
  • In the 1990s, Professor Shneiderman developed tree mapping, which uses interlocking rectangles to represent complicated data sets. The rectangles are sized and colored to convey different kinds of information, like revenue or geographic region, says Jim Bartoo, the chief executive of the Hive Group, a software company that uses tree mapping to help companies and government agencies monitor operational data. When executives or plant managers see the nested rectangles grouped together, he adds, they should be able to immediately spot anomalies or trends. In one tree-map visualization of a sales department on the Hive Group site, red tiles represent underperforming sales representatives while green tiles represent people who exceeded their sales quotas. So it’s easy to identify the best sales rep in the company: the biggest green tile. But viewers can also reorganize the display — by region, say, or by sales manager — to see whether patterns exist that explain why some employees are falling behind. “It’s the ability of the human brain to pick out size and color” that makes tree mapping so intuitive, Mr. Bartoo says. Information visualization, he adds, “suddenly starts answering questions that you didn’t know you had.” (A minimal slice-and-dice layout sketch follows this list.)
  • data visualization is no longer just a useful tool for researchers and corporations. It’s also an entertainment and marketing vehicle.
  • ...2 more annotations...
  • In 2009, for example, Stamen Design, a technology and design studio in San Francisco, created a live visualization of Twitter traffic during the MTV Video Music awards. In the animated graphic, floating bubbles, each displaying a photograph of a celebrity, expanded or contracted depending on the volume of Twitter activity about each star. The project provided a visceral way for viewers to understand which celebrities dominated Twitter talk in real time, says Eric Rodenbeck, the founder and creative director of Stamen Design.
  • Designers once created visual representations of data that would steer viewers to information that seemed the most important or newsworthy, he says; now they create visualizations that contain attractive overview images and then let users direct their own interactive experience — wherever it may take them. “It’s not about leading with a certain view anymore,” he says. “It’s about delivering the view that gets the most participation and engagement.”
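To make the tree-mapping idea concrete, here is a minimal slice-and-dice layout sketch in Python: rectangle area encodes sales volume and color encodes quota performance, loosely following the Hive Group sales example. The names, figures, and thresholds are hypothetical, and real treemap tools use more sophisticated ("squarified") layouts.

```python
def slice_layout(items, x, y, w, h, horizontal=True):
    """Split the rectangle (x, y, w, h) into strips whose sizes are
    proportional to each item's value (the basic slice-and-dice layout)."""
    total = sum(value for _, value in items)
    rects, offset = [], 0.0
    for label, value in items:
        frac = value / total
        if horizontal:
            rects.append((label, (x + offset * w, y, w * frac, h)))
        else:
            rects.append((label, (x, y + offset * h, w, h * frac)))
        offset += frac
    return rects

# Hypothetical sales data: region -> [(rep, sales, quota), ...]
data = {
    "East": [("Ana", 120, 100), ("Bo", 80, 100)],
    "West": [("Cy", 200, 150), ("Di", 90, 120)],
}

# Level 1: size each region by its total sales, splitting horizontally.
regions = [(r, sum(s for _, s, _ in reps)) for r, reps in data.items()]
for region, (rx, ry, rw, rh) in slice_layout(regions, 0, 0, 1, 1, horizontal=True):
    # Level 2: size each rep within the region, splitting the other way.
    reps = data[region]
    layout = slice_layout([(n, s) for n, s, _ in reps], rx, ry, rw, rh, horizontal=False)
    for (name, sales, quota), (_, (x, y, w, h)) in zip(reps, layout):
        color = "green" if sales >= quota else "red"   # color encodes quota performance
        print(f"{region}/{name}: x={x:.2f} y={y:.2f} w={w:.2f} h={h:.2f} {color}")
```

Area answers "who sells the most?" at a glance, while color answers "who is behind quota?"; regrouping the first level (by manager instead of region, say) is just a different level-1 list.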
8More

Evolutionary analysis shows languages obey few ordering rules - 0 views

  • The authors of the new paper point out just how hard it is to study languages. We're aware of over 7,000 of them, and they vary significantly in complexity. There are a number of large language families that are likely derived from a single root, but a large number of languages don't slot easily into one of the major groups. Against that backdrop, even a set of simple structural decisions—does the noun or verb come first? where does the preposition go?—become dizzyingly complex, with different patterns apparent even within a single language tree.
  • Linguists, however, have been attempting to find order within the chaos. Noam Chomsky helped establish the Generative school of thought, which suggests that there must be some constraints to this madness, some rules that help make a language easier for children to pick up, and hence more likely to persist. Others have approached this issue via a statistical approach (the authors credit those inspired by Joseph Greenberg for this), looking for word-order rules that consistently correlate across language families. This approach has identified a handful of what may be language universals, but our uncertainty about language relationships can make it challenging to know when some of these correlations are simply derived from a common inheritance.
  • For anyone with a biology background, having traits shared through common inheritance should ring a bell. Evolutionary biologists have long been able to build family trees of related species, called phylogenetic trees. By figuring out what species have the most traits in common and grouping them together, it's possible to identify when certain features have evolved in the past. In recent years, the increase in computing power and DNA sequences to align has led to some very sophisticated phylogenetic software, which can analyze every possible tree and perform a Bayesian statistical analysis to figure out which trees are most likely to represent reality. By treating language features like subject-verb order as a trait, the authors were able to perform this sort of analysis on four different language families: 79 Indo-European languages, 130 Austronesian languages, 66 Bantu languages, and 26 Uto-Aztecan languages. Although we don't have a complete roster of the languages in those families, they include over 2,400 languages that have been evolving for a minimum of 4,000 years. (A toy illustration of the trait-coding step follows this list.)
  • ...4 more annotations...
  • The results are bad news for universalists: "most observed functional dependencies between traits are lineage-specific rather than universal tendencies," according to the authors. The authors were able to identify 19 strong correlations between word order traits, but none of these appeared in all four families; only one of them appeared in more than two. Fifteen of them only occur in a single family. Specific predictions based on the Greenberg approach to linguistics also failed to hold up under the phylogenetic analysis. "Systematic linkages of traits are likely to be the rare exception rather than the rule," the authors conclude.
  • If universal features can't account for what we observe, what can? Common descent. "Cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states."
  • it still leaves a lot of areas open for linguists to argue about. And the study did not build an exhaustive tree of any of the language families, in part because we probably don't have enough information to classify all of them at this point.
  • Still, it's hard to imagine any further details could overturn the gist of things, given how badly features failed to correlate across language families. And the work might be well received in some communities, since it provides an invitation to ask a fascinating question: given that there aren't obvious word order patterns across languages, how does the human brain do so well at learning the rules that are a peculiarity to any one of them?
  •  
    young children can easily learn to master more than one language in an astonishingly short period of time. This has led a number of linguists, most notably Noam Chomsky, to suggest that there might be language universals, common features of all languages that the human brain is attuned to, making learning easier; others have looked for statistical correlations between languages. Now, a team of cognitive scientists has teamed up with an evolutionary biologist to perform a phylogenetic analysis of language families, and the results suggest that when it comes to the way languages order key sentence components, there are no rules.
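As a toy illustration of the trait-coding step mentioned above (not the Bayesian phylogenetic analysis itself), the sketch below codes two invented word-order traits per language and compares how often they co-vary in the pooled sample versus within each family. Families, languages, and codings are hypothetical, chosen only to show how a pooled correlation can mask lineage-specific patterns.

```python
from collections import defaultdict

# Hypothetical trait coding per language:
#   verb_object   = 1 if the verb precedes its object, else 0
#   prepositional = 1 if adpositions precede the noun phrase, else 0
LANGUAGES = [
    # (family, language, verb_object, prepositional)
    ("Family-1", "Lang A", 1, 1),
    ("Family-1", "Lang B", 1, 1),
    ("Family-1", "Lang C", 0, 0),
    ("Family-2", "Lang D", 0, 1),
    ("Family-2", "Lang E", 0, 1),
    ("Family-2", "Lang F", 1, 0),
]

def agreement(rows):
    """Fraction of languages where the two traits take the same value --
    a crude stand-in for a proper correlation test."""
    return sum(vo == prep for _, _, vo, prep in rows) / len(rows)

by_family = defaultdict(list)
for row in LANGUAGES:
    by_family[row[0]].append(row)

print(f"pooled agreement: {agreement(LANGUAGES):.2f}")    # 0.50 -- looks like noise
for family, rows in by_family.items():
    print(f"{family} agreement: {agreement(rows):.2f}")   # 1.00 and 0.00 -- lineage-specific
```

In this contrived example the two traits track each other perfectly in one family and not at all in the other, which is the kind of lineage-specific dependency the study reports; the real analysis evaluates such dependencies over full phylogenetic trees rather than flat family labels.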
5More

When information is power, these are the questions we should be asking | Online Journal... - 0 views

  • “There is absolutely no empiric evidence that shows that anyone actually uses the accounts produced by public bodies to make any decision. There is no group of principals analogous to investors. There are many lists of potential users of the accounts. The Treasury, CIPFA (the UK public sector accounting body) and others have said that users might include the public, taxpayers, regulators and oversight bodies. I would be prepared to put up a reward for anyone who could prove to me that any of these people have ever made a decision based on the financial reports of a public body. If there are no users of the information then there is no point in making the reports better. If there are no users more technically correct reports do nothing to improve the understanding of public finances. In effect all that better reports do is legitimise the role of professional accountants in the accountability process.
  • raw data – and the ability to interrogate that – should instead be made available because (quoting Anthony Hopwood): “Those with the power to determine what enters into organisational accounts have the means to articulate and diffuse their values and concerns, and subsequently to monitor, observe and regulate the actions of those that are now accounted for.”
  • Data is not just some opaque term; something for geeks: it’s information: the raw material we deal in as journalists. Knowledge. Power. The site of a struggle for control. And considering it’s a site that journalists have always fought over, it’s surprisingly placid as we enter one of the most important ages in the history of information control.
  • ...1 more annotation...
  • 3 questions to ask of any transparency initiative: If information is to be published in a database behind a form, then it’s hidden in plain sight. It cannot be easily found by a journalist, and only simple questions will be answered. If information is to be published in PDFs or JPEGs, or some format that you need proprietary software to see, then it cannot easily be questioned by a journalist. If you will have to pass a test to use the information, then obstacles will be placed between the journalist and that information. The next time an organisation claims that they are opening up their information, tick those questions off. (If you want more, see Gurstein’s list of 7 elements that are needed to make effective use of open data).
  •  
    control of information still represents the exercise of power, and how shifts in that control as a result of the transparency/open data/linked data agenda are open to abuse, gaming, or spin.
6More

Net neutrality enshrined in Dutch law | Technology | guardian.co.uk - 0 views

  • The measure, which was adopted with a broad majority in the lower house of parliament, will prevent KPN, the Dutch telecommunications market leader, and the Dutch arms of Vodafone and T-Mobile from blocking or charging for internet services like Skype or WhatsApp, a free text service. Its sponsors said that the measure would pass a pro forma review in the Dutch senate.
  • The Dutch restrictions on operators are the first in the EU. The European commission and European parliament have endorsed network neutrality guidelines but have not yet taken legal action against operators that block or impose extra fees on consumers using services such as Skype, the voice and video service being acquired by Microsoft, and WhatsApp, a mobile software maker based in California.
  • Advocates hailed the move as a victory for consumers, while industry officials predicted that mobile broadband charges could rise in the Netherlands to compensate for the new restrictions.
  • ...2 more annotations...
  • Only one other country, Chile, has written network neutrality requirements into its telecommunications law. The Chilean law, which was approved in July 2010, took effect in May.
  • In the US, an attempt by the Federal Communications Commission to impose a similar set of network neutrality restrictions on American operators has been tied up in legal challenges from the industry.
  •  
    The Netherlands has become the first country in Europe to enshrine the concept of network neutrality into national law by banning its mobile telephone operators from blocking or charging consumers extra for using internet-based communications services.
4More

Facebook blocks Google Chrome extension for exporting friends | ZDNet - 0 views

  • Facebook Friend Exporter wasn’t designed with Google+ in mind (version 1 was in fact released in November 2010), but it has exploded in the past week as Google+ beta users look for ways to port over all their Facebook friends to Google+. Facebook clearly noticed a spike in usage (the extension now has more than 17,000 users), and decided to block it. Mansour says that Facebook removed emails from their mobile site, which were critical to the original design of his extension. He told me that the company had implemented a throttling mechanism: if you visit any friend page five times in a short period of time, the email field is removed. (A hypothetical sketch of such a throttle follows this list.)
  • “Facebook is actually hiding data (email) from you to see when your friends explicitly shared that to you,” Mansour told me in an e-mail. “Making it really hard to scrape because the only missing data is your emails, and that is your friends identity. Nothing else is.”
  • As CNET points out, Facebook Friend Exporter technically violates Facebook’s Terms of Service. Section 3.2 states the following: You will not collect users’ content or information, or otherwise access Facebook, using automated means (such as harvesting bots, robots, spiders, or scrapers) without our permission. Mansour doesn’t care about this, as he says in the extension’s description: Get *your* data contact out of Facebook, whether they want you to or not. You gave them your friends and allowed them to store that data, and you have right to take it back out! Facebook doesn’t own my friends.
  • ...1 more annotation...
  • After he found out that Facebook was throttling the email field once his extension got popular, he wrote the following on his Google+ profile: I am bloody annoyed now, because this proves Facebook owns every users data on Facebook. You don’t own anything!
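Below is a rough, hypothetical sketch of the kind of throttling mechanism Mansour describes: after a handful of friend-page loads in a short window, the email field is simply omitted from the response. The window length, threshold, and data shapes are assumptions for illustration only; this is not Facebook's actual implementation.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # assumed "short period of time"
MAX_VIEWS = 5            # threshold reported by Mansour

recent_views = defaultdict(deque)   # viewer id -> timestamps of friend-page loads

def render_friend_page(viewer_id, friend, now=None):
    """Return the friend's profile fields, dropping the email once the viewer
    exceeds MAX_VIEWS page loads inside the sliding window."""
    now = time.time() if now is None else now
    views = recent_views[viewer_id]
    while views and now - views[0] > WINDOW_SECONDS:   # expire old timestamps
        views.popleft()
    views.append(now)
    profile = {"name": friend["name"]}
    if len(views) <= MAX_VIEWS:
        profile["email"] = friend["email"]             # shown only under the threshold
    return profile

# Quick check: the sixth rapid view omits the email field.
friend = {"name": "Alice", "email": "alice@example.com"}
for i in range(6):
    print(i + 1, render_friend_page("viewer-1", friend, now=1000.0 + i))
```

A throttle like this is hard for a scraper to distinguish from missing data, which is why it breaks an extension that must walk every friend page in quick succession.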
1More

Google is funding a new software project that will automate writing local news - Recode - 0 views

  •  
    "Radar aims to automate local reporting with large public databases from government agencies or local law enforcement - basically roboticizing the work of reporters. Stories from the data will be penned using Natural Language Generation, which converts information gleaned from the data into words. The robotic reporters won't be working alone. The grant includes funds allocated to hire five journalists to identify datasets, as well as curate and edit the news articles generated from Radar. The project also aims to create automated ways to add images and video to robot-made stories."