New Media Ethics 2009 course: Group items matching "Software" in title, tags, annotations or URL

Weiye Loh

Open science: a future shaped by shared experience | Education | The Observer - 0 views

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? "It was an experiment," he admits. "I thought it would be interesting to try." He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn't have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls "an obligation and responsibility to the public purse".
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn't be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."
Weiye Loh

Facial Recognition Software Singles Out Innocent Man | The Utopianist - Think Bigger - 0 views

  • Gass was at home when he got a letter from the Massachusetts Registry of Motor Vehicles saying his license had been revoked. Why? The Boston Globe explains: An antiterrorism computerized facial recognition system that scans a database of millions of state driver’s license images had picked his as a possible fraud. It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.
  •  
    While a boon to police departments looking to save time and money fighting identity fraud, it's frightening to think that people are having their lives seriously disrupted thanks to computer errors. If you are, say, a truck driver, something like this could cause you weeks of lost pay, something many Americans just can't afford. And what if this technology expands beyond just rooting out identity fraud? What if you were slammed against a car hood as police falsely identified you as a criminal? The fact that Gass didn't even have a chance to fight the computer's findings before his license was suspended is especially disturbing. What would you do if this happened to you?
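How a lookalike can trip such a system: face matchers typically reduce images to feature vectors and flag any pair closer than a threshold. A minimal sketch with invented vectors and an invented threshold (not the Registry's actual algorithm):

    # Toy face-match check: two different people whose feature vectors sit
    # closer than the threshold get flagged as the same person.
    import math

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    license_photo = [0.61, 0.33, 0.58, 0.90]   # invented feature vector
    lookalike     = [0.60, 0.35, 0.55, 0.91]   # a different driver, similar features
    THRESHOLD = 0.10                           # a loose threshold invites false flags

    if distance(license_photo, lookalike) < THRESHOLD:
        print("flagged as possible fraud")     # a false positive, as happened to Gass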
Weiye Loh

Measuring the Unmeasurable (Internet) and Why It Matters « Gurstein's Community Informatics - 0 views

  • it appears that there is a quite significant hole in the National Accounting (and thus the GDP statistics) around Internet related activities since most of this accounting is concerned with measuring the production and distribution of tangible products and the associated services. For the most part the available numbers don’t include many Internet (or “social capital” e.g. in health and education) related activities as they are linked to intangible outputs. The significance of not including social capital components in the GDP has been widely discussed elsewhere. The significance (and potential remediation) of the absence of much of the Internet related activities was the subject of the workshop.
  • there had been a series of critiques of GDP statistics from Civil Society (CS) over the last few years—each associated with a CS “movement”—the Women’s Movement and the absence of measurement of “women’s (and particularly domestic) work”; the Environmental Movement and the absence of the longer term and environmental costs of the production of the goods that the GDP so blithely counts as a measure of national economic well-being; and most recently with the Sustainability Movement, and the absence of measures reflective of the longer term negative effects/costs of resource depletion and environmental degradation. What I didn’t see anywhere apart from the background discussions to the OECD workshop itself were critiques reflecting issues related to the Internet or ICTs.
  • the implications of the limitations in the Internet accounting went beyond a simple technical glitch and had potentially quite profound implications from a national policy and particularly a CS and community based development perspective. The possible distortions in economic measurement arising from the absence of Internet associated numbers in the SNA (there may be some $750 BILLION a year in “value” being generated by Internet based search alone!) lead to the very real possibility that macro-economic analysis and related policy making may be operating on the basis of inadequate and even fallacious assumptions.
  • perhaps of greatest significance from the perspective of Civil Society and of communities is the overall absence of measurement and thus inclusion in the economic accounting of the value of the contributions provided to, through and on the Internet of various voluntary and not-for-profit initiatives and activities. Thus for example, the millions of hours of labour contributed to Wikipedia, or to the development of Free or Open Source software, or to providing support for public Internet access and training are not included as a net contribution or benefit to the economy (as measured through the GDP). Rather, this is measured as a negative effect since, as some would argue, those who are making this contribution could be using their time and talents in more “productive” (and “economically measurable”) activities. Thus for example, a region or country that chooses to go with free or open source software as the basis for its in-school computing is not only “not contributing to ‘economic well being’” it is “statistically” a “cost” to the economy since it is not allowing for expenditures on, for example, suites of Microsoft products.
  • there appears to have been no systematic attention paid to the relationship of the activities and growth of voluntary contributions to the Internet and the volume, range and depth of Internet activity, digital literacy and economic value being derived from the use of the Internet.
Weiye Loh

Roger Pielke Jr.'s Blog: Faith-Based Education and a Return to Shop Class - 0 views

  • In the United States, nearly a half century of research, application of new technologies and development of new methods and policies has failed to translate into improved reading abilities for the nation’s children.
  • the reasons why progress has been so uneven point to three simple rules for anticipating when more research and development (R&D) could help to yield rapid social progress. In a world of limited resources, the trick is distinguishing problems amenable to technological fixes from those that are not. Our rules provide guidance in making this distinction...
  • unlike vaccines, the textbooks and software used in education do not embody the essence of what needs to be done. That is, they don’t provide the basic ‘go’ of teaching and learning. That depends on the skills of teachers and on the attributes of classrooms and students. Most importantly, the effectiveness of a vaccine is largely independent of who gives or receives it, and of the setting in which it is given.
  • The three rules for a technological fix proposed by Sarewitz and Nelson are: I. The technology must largely embody the cause–effect relationship connecting problem to solution. II. The effects of the technological fix must be assessable using relatively unambiguous or uncontroversial criteria. III. Research and development is most likely to contribute decisively to solving a social problem when it focuses on improving a standardized technical core that already exists.
  • technology in the classroom fails with respect to each of the three criteria: (a) technology is not a causal factor in learning in the sense that more technology means more learning, (b) assessment of educational outcomes is itself difficult and contested, much less disentangling various causal factors, and (c) the lack of evidence that technology leads to improved educational outcomes means that there is no such standardized technological core.
  • This conundrum calls into question one of the most significant contemporary educational movements. Advocates for giving schools a major technological upgrade — which include powerful educators, Silicon Valley titans and White House appointees — say digital devices let students learn at their own pace, teach skills needed in a modern economy and hold the attention of a generation weaned on gadgets. Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
  • absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills — like using PowerPoint and multimedia tools — at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.
  • [D]emand for educated labour is being reconfigured by technology, in much the same way that the demand for agricultural labour was reconfigured in the 19th century and that for factory labour in the 20th. Computers can not only perform repetitive mental tasks much faster than human beings. They can also empower amateurs to do what professionals once did: why hire a flesh-and-blood accountant to complete your tax return when Turbotax (a software package) will do the job at a fraction of the cost? And the variety of jobs that computers can do is multiplying as programmers teach them to deal with tone and linguistic ambiguity. Several economists, including Paul Krugman, have begun to argue that post-industrial societies will be characterised not by a relentless rise in demand for the educated but by a great “hollowing out”, as mid-level jobs are destroyed by smart machines and high-level job growth slows. David Autor, of the Massachusetts Institute of Technology (MIT), points out that the main effect of automation in the computer era is not that it destroys blue-collar jobs but that it destroys any job that can be reduced to a routine. Alan Blinder, of Princeton University, argues that the jobs graduates have traditionally performed are if anything more “offshorable” than low-wage ones. A plumber or lorry-driver’s job cannot be outsourced to India.
  •  
    In 2008 Dick Nelson and Dan Sarewitz had a commentary in Nature (here in PDF) that eloquently summarized why we should not expect technology in the classroom to result in better educational outcomes, as they suggest we should in the case of a technology like vaccines.
Weiye Loh

A Data Divide? Data "Haves" and "Have Nots" and Open (Government) Data « Gurstein's Community Informatics - 0 views

  • Researchers have extensively explored the range of social, economic, geographical and other barriers which underlie and to a considerable degree “explain” (cause) the Digital Divide.  My own contribution has been to argue that “access is not enough”, it is whether opportunities and pre-conditions are in place for the “effective use” of the technology particularly for those at the grassroots.
  • The idea of a possible parallel “Data Divide” between those who have access and the opportunity to make effective use of data and particularly “open data” and those who do not, began to occur to me.  I was attending several planning/recruitment events for the Open Data “movement” here in Vancouver and the socio-demographics and some of the underlying political assumptions seemed to be somewhat at odds with the expressed advocacy position of “data for all”.
  • Thus the “open data” which was being argued for would not likely be accessible and usable to the groups and individuals with which Community Informatics has largely been concerned – the grassroots, the poor and marginalized, indigenous people, rural people and slum dwellers in Less Developed countries. It was/is hard to see, given the explanations provided to date, how these folks could use this data in any effective way to help them in responding to the opportunities for advance and social betterment which open data advocates have been indicating as the outcome of their efforts.
  • many involved in “open data” saw their interests and activities being confined to making data ‘legally” and “technically” accessible — what happened to it after that was somebody else’s responsibility.
  • while the Digital Divide deals with, for the most part “infrastructure” issues, the Data Divide is concerned with “content” issues.
  • where a Digital Divide might exist for example, as a result of geographical or policy considerations and thus have uniform effects on all those on the wrong side of the “divide” whatever their socio-demographic situation; a Data Divide and particularly one of the most significant current components of the Open Data movement i.e. OGD, would have particularly damaging negative effects and result in particularly significant lost opportunities for the most vulnerable groups and individuals in society and globally. (I’ve discussed some examples here at length in a previous blogpost.)
  • Data Divide thus would be the gap between those who have access to and are able to use Open (Government) Data and those who are not so enabled.
  • 1. infrastructure—being on the wrong side of the “Digital Divide” and thus not having access to the basic infrastructure supporting the availability of OGD.
    2. devices—OGD that is not universally accessible and device independent (that only runs on iPhones, for example)
    3. software—“accessible” OGD that requires specialized technical software/training to become “usable”
    4. content—OGD not designed for use by those with handicaps, non-English speakers, those with low levels of functional literacy, for example
    5. interpretation/sense-making—OGD that is only accessible for use through a technical intermediary and/or is useful only if “interpreted” by a professional intermediary
    6. advocacy—whether the OGD is in a form and context that is supportive for use in advocacy (or other purposes) on behalf of marginalized and other groups and individuals
    7. governance—whether the OGD process includes representation from the broad public in its overall policy development and governance (not just lawyers, techies and public servants).
Weiye Loh

Stanford Security Lab Tracks Do Not Track - 0 views

  • What they found is that more than half the NAI member companies did not remove tracking codes after someone opted out.
  • At least eight NAI members promise to stop tracking after opting out, but nonetheless leave tracking cookies in place.
  • I take that to mean that the other 25 companies never actually said they would remove tracking cookies, it’s just that they belong to a fellowship that wishes they would. On the positive side, ten companies went beyond what their privacy policy promises (say that three times fast) and two companies were “taking overt steps to respect Do Not Track.”
  • There’s probably a small percentage of companies who will blatantly ignore any attempts to stop tracking. For the rest, it’s more likely a case of not having procedures in place. Their intentions are good, but lack of manpower and the proper tech is probably what’s keeping them from following through on those good thoughts.
  • Since they can’t go after them with big guns, the Stanford study went with public embarrassment. They’ve published a list of the websites showing which ones are compliant and which ones aren’t. If you’re working with an ad network, you might want to check it out.
  •  
    The folks at the Stanford Security Lab are a suspicious bunch. Since they're studying how to make computers more secure, I guess it comes with the territory. Their current interest is tracking cookies and the Do Not Track opt-out process. Using "experimental software," they conducted a survey to see how many members of the Network Advertising Initiative (NAI) actually complied with the new Do Not Track initiatives.
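The shape of such an audit is easy to picture. A minimal sketch (placeholder URLs and cookie names, not the Stanford team's actual experimental software): record the cookies a tracker sets, perform the opt-out, revisit, and see what is still there.

    # Opt-out audit sketch; the URLs and the "optout" cookie name are placeholders.
    import requests

    PIXEL_URL = "https://example.com/pixel"      # stand-in for an ad network tracker
    OPT_OUT_URL = "https://example.com/opt-out"  # stand-in for its opt-out endpoint

    session = requests.Session()
    session.get(PIXEL_URL)                       # baseline visit sets cookies
    before = set(session.cookies.keys())

    session.get(OPT_OUT_URL)                     # perform the opt-out
    session.get(PIXEL_URL)                       # revisit the tracker
    after = set(session.cookies.keys())

    leftover = (after & before) - {"optout"}     # ignore the opt-out marker itself
    print("cookies before opt-out:", sorted(before))
    print("tracking cookies still present:", sorted(leftover))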
Weiye Loh

Harvard professor spots Web search bias - Business - The Boston Globe - 0 views

  • Sweeney said she has no idea why Google searches seem to single out black-sounding names. There could be myriad issues at play, some associated with the software, some with the people searching Google. For example, the more often searchers click on a particular ad, the more frequently it is displayed subsequently. “Since we don’t know the reason for it,” she said, “it’s hard to say what you need to do.”
  • But Danny Sullivan, editor of SearchEngineLand.com, an online trade publication that tracks the Internet search and advertising business, said Sweeney’s research has stirred a tempest in a teapot. “It looks like this fairly isolated thing that involves one advertiser.” He also said that the results could be caused by black Google users clicking on those ads as much as white users. “It could be that black people themselves could be causing the stuff that causes the negative copy to be selected more,” said Sullivan. “If most of the searches for black names are done by black people . . . is that racially biased?”
  • On the other hand, Sullivan said Sweeney has uncovered a problem with online searching — the casual display of information that might put someone in a bad light. Rather than focusing on potential instances of racism, he said, search services such as Google might want to put more restrictions on displaying negative information about anyone, black or white.
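The click-feedback mechanism Sweeney points to (the more an ad is clicked, the more it is shown) is easy to simulate. A toy sketch with invented numbers, not Google's actual ad-serving logic: two ads with identical true appeal, where a small early click advantage is enough to skew who gets seen.

    # Greedy CTR feedback: the server shows whichever ad has the higher
    # observed click-through rate, so early clicks compound.
    import random

    ads = {"early-click ad": [2, 20], "other ad": [1, 20]}  # [clicks, impressions]

    def pick_ad():
        return max(ads, key=lambda name: ads[name][0] / ads[name][1])

    random.seed(1)
    for _ in range(10000):
        name = pick_ad()
        ads[name][1] += 1
        if random.random() < 0.05:   # identical 5% true click rate for both ads
            ads[name][0] += 1

    for name, (clicks, shown) in ads.items():
        print(f"{name}: shown {shown} times, observed CTR {clicks / shown:.3f}")
    # The ad that happened to start with one extra click typically ends up with
    # far more impressions, even though both ads are equally appealing.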
juliet huang

Virus as a call for help, as a part of a larger social problem - 7 views

I agree with this view, and I also add on that yes, it is probably more profitable for the capitalist, wired society to continue creating anti-virus programs, open more IT repair shops etc., than to...

Virus

Jody Poh

U.S. students fight copyright law - 9 views

http://www.nytimes.com/2007/10/11/technology/11iht-download.1.7846678.html?scp=20&sq=copyright&st=Search A student previously fined for breaking copyright laws at Brown University in Rhode Island ...

copyright "file sharing" "Intellectual property rights"

started by Jody Poh on 25 Aug 09 no follow-up yet
Weiye Loh

P2P Foundation » Blog Archive » Crowdsourced curation, reputation systems, and the social graph - 0 views

  • A good example of manual curation vs. crowdsourced curation is the competing app markets on the Apple iPhone and Google Android phone operating systems.
  • Apple is a monarchy, albeit with a wise and benevolent king. Android is a burgeoning democracy, inefficient and messy, but free. Apple is the last, best example of the Industrial Age and its top-down, mass market/mass production paradigm.
  • They manufacture cool. They rely on “consumers”, and they protect those consumers from too many choices by selecting what is worthy, and what is not.
  • systems that allow crowdsourced judgment to be tweaked, not to the taste of the general mass, which produces lowest common denominator effects, but to people and experts that you can trust for their judgment.
  • these systems are now implemented by Buzz and Digg 4
  • Important for me though, is that they don’t just take your social graph as is, because that mixes many different people for different reasons, but that you can tweak the groups.
  • “This is the problem with the internet! It’s full of crap!” Many would argue that without professional producers, editors, publishers, and the natural scarcity that we became accustomed to, there’s a flood of low-quality material that we can’t possibly sift through on our own. From blogs to music to software to journalism, one of the biggest fears of the established order is how to handle the oncoming glut of mediocrity. Who shall tell us The Good from The Bad? “We need gatekeepers, and they need to be paid!”
  • The Internet has enabled us to build our social graph, and in turn, that social graph acts as an aggregate gatekeeper. The better that these systems for crowdsourcing the curation of content become, the more accurate the results will be.
  • This social-graph-as-curation is still relatively new, even by Internet standards. However, with tools like Buzz and Digg 4 (which allows you to see the aggregate ratings for content based on your social graph, and not the whole wide world) this technique is catching up to human publishers fast. For those areas where we don’t have strong social ties, we can count on reputation systems to help us “rate the raters”. These systems allow strangers to rate each other’s content, giving users some idea of who to trust, without having to know them personally. Yelp has a fairly mature reputation system, where locations are rated by users, but the users are rated, in turn, by each other.
  • Reputation systems and the social graph allow us to crowdsource curation.
  • Can you imagine if Apple had to approve your videos for posting on YouTube, where every minute, 24 hours of footage are uploaded? There’s no way humans could keep up! The traditional forms of curation and gatekeeping simply cannot scale to meet the increase in production and transmission that the Internet allows. Crowdsourcing is the only curatorial/editorial mechanism that can scale to match the increased ability to produce that the Internet has given us.
  •  
    Crowdsourced curation, reputation systems, and the social graph
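A concrete way to read "rating the raters": weight each rating by the reputation of the person giving it. A minimal sketch with invented names, scores and reputation weights (not Yelp's or Digg's actual formula):

    # Reputation-weighted curation: trusted raters pull the aggregate
    # toward their judgment. All data invented.
    ratings = [            # (rater, item, score from 1 to 5)
        ("alice", "article-1", 5),
        ("bob",   "article-1", 2),
        ("carol", "article-1", 4),
    ]
    reputation = {"alice": 0.9, "bob": 0.2, "carol": 0.7}  # 0 untrusted, 1 trusted

    def weighted_score(item):
        scored = [(reputation[rater], score)
                  for rater, rated_item, score in ratings if rated_item == item]
        total = sum(weight for weight, _ in scored)
        return sum(weight * score for weight, score in scored) / total

    print(f"article-1: {weighted_score('article-1'):.2f}")  # 4.28 vs a plain mean of 3.67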
Weiye Loh

In Wired Singapore Classrooms, Cultures Clash Over Web 2.0 - Technology - The Chronicle of Higher Education - 0 views

  • Dozens of freshmen at Singapore Management University spent one evening last week learning how to "wiki," or use the software that lets large numbers of people write and edit class projects online. Though many said experiencing a public editing process similar to that of Wikipedia could prove valuable, some were wary of the collaborative tool, with its public nature and the ability to toss out or revise the work of their classmates.
  • It puts students in the awkward position of having to publicly correct a peer, which can cause the corrected person to lose face.
  • "You have to be more aware of others and have a sensitivity to others."
  • While colleges have been trumpeting the power of social media as an educational tool, here in Asia, going public with classwork runs counter to many cultural norms, surprising transplanted professors and making some students a little uneasy.
  • Publicly oriented Web 2.0 tools, like wikis, for instance, run up against ideas about how one should treat others in public. "People were very reluctant to edit things that other people had posted," said American-trained C. Jason Woodard, an assistant professor of information systems who started the wiki project two years ago. "I guess out of deference. People were very careful to not want to edit their peers. Getting people out of that mind-set has been a real challenge."
  • Students are also afraid of embarrassing themselves. Some privately expressed concern to me about putting unfinished work out on the Web for the world to see, as the assignment calls for them to do
  • faced hesitancy when asking students to use social-media tools for class projects. Few students seemed to freely post to blogs or Twitter, electing instead to communicate using Facebook accounts with the privacy set so that only close friends could see them
  • In a small country like Singapore, the traditional face-to-face network still reigns supreme. Members of a network are extremely loyal to that network, and if you are outside of it, a lot of times you aren't even given the time of day.
  • In fact, Singapore's future depends on technology and innovation at least according to its leaders, who have worked for years to position the country as friendly to the foreign investment that serves as its lifeblood. The city-state literally has no natural resources except its people, who it hopes to turn into "knowledge workers" (a buzzword I heard many times during my visit).
  • Yet this is a culture that many here describe as conservative, where people are not known for pushing boundaries. That was the first impression that Giorgos Cheliotis had when he first arrived to teach in Singapore several years ago from his native Greece.
  • he suspects they may be more comfortable because they are seniors, and because they feel that it has been assigned, and so they must.
  •  
    In Wired Singapore Classrooms, Cultures Clash Over Web 2.0
Weiye Loh

Apples and PCs: Who innovates more, Apple or HP? | The Economist - 1 views

  • In terms of processing power, speed, memory, and so on, how do Macs and PCs actually compare? And does Apple innovate in terms of basic hardware quality as often or less often than the likes of HP, Compaq, and other producers? This question is of broader interest from an economist's point of view because it also has to do with the age-old question of whether competition or monopoly is a better spur to innovation. In a certain sense, Apple is a monopolist, and PC makers are in a more competitive market. (I say in a certain sense because obviously Macs and PCs are substitutes; it's just that they're more imperfect substitutes than two PCs are for each other, in part because of software migration issues.)
  • Schumpeter argued long back that because a monopolist reaps the full reward from innovation, such firms would be more innovative. The case for patents relies in part on a version of this argument: companies are given monopoly rights over a new product for a period of time in order for them to be able to recoup the costs of innovation; without such protection, it is argued, they would not find it beneficial to innovate in the first place.
  • others have argued that competition spurs innovation by giving firms a way to differentiate themselves from their competitors (in a way, creating something new gives a company a temporary, albeit brief, "monopoly")
  •  
    Who innovates more, Apple or HP?
Weiye Loh

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but after all, western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
  •  
    Computer beats human at Japanese chess for first time
Weiye Loh

The Data-Driven Life - NYTimes.com - 0 views

  • Humans make errors. We make errors of fact and errors of judgment. We have blind spots in our field of vision and gaps in our stream of attention.
  • These weaknesses put us at a disadvantage. We make decisions with partial information. We are forced to steer by guesswork. We go with our gut.
  • Others use data.
  • Others use data. A timer running on Robin Barooah’s computer tells him that he has been living in the United States for 8 years, 2 months and 10 days. At various times in his life, Barooah — a 38-year-old self-employed software designer from England who now lives in Oakland, Calif. — has also made careful records of his work, his sleep and his diet.
  • A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration. Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.
  • “People have such very poor sense of time,” Barooah says, and without good time calibration, it is much harder to see the consequences of your actions. If you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them.
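Barooah's analysis reduces to a before/after comparison around a dated intervention line. A minimal sketch with invented dates and minutes (his actual log is not public):

    # Barooah-style self-experiment: compare focused-work minutes on either
    # side of the "Stopped drinking coffee" line. Data and date invented.
    from datetime import date
    from statistics import mean

    focus_log = {                        # date -> minutes of focused work
        date(2009, 9, 28): 95,  date(2009, 10, 2): 110, date(2009, 10, 8): 88,
        date(2009, 10, 15): 150, date(2009, 10, 21): 170, date(2009, 10, 28): 160,
    }
    cutoff = date(2009, 10, 12)          # the big black line on his chart

    before = [m for d, m in focus_log.items() if d < cutoff]
    after = [m for d, m in focus_log.items() if d >= cutoff]
    print(f"mean focused minutes before: {mean(before):.0f}")  # low, narrow columns
    print(f"mean focused minutes after:  {mean(after):.0f}")   # high, thick columns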
Weiye Loh

Roger Pielke Jr.'s Blog: Core Questions in the Governance of Innovation - 0 views

  • Today's NYT has a couple interesting articles about technological innovations that we may not want, and that we may wish to regulate in some manner, formally or informally.  These technologies suggest some core questions that lie at the heart of the management of innovation.
  • The first article discusses Google Goggles, an application that allows people to search the internet based on an image taken by a smartphone.  Google has decided not to allow this technology to include face recognition in its software, even though people have requested it.
  • Google could have put face recognition into the Goggles application; indeed, many users have asked for it. But Google decided against it because smartphones can be used to take pictures of individuals without their knowledge, and a face match could retrieve all kinds of personal information — name, occupation, address, workplace.
  • “It was just too sensitive, and we didn’t want to go there,” said Eric E. Schmidt, the chief executive of Google. “You want to avoid enabling stalker behavior.”
  • The second article focuses on innovations in high frequency trading in financial markets, which bears some responsibility for the so-called "flash crash" of May 6th last year, in which the DJIA plunged more than 700 points in just minutes.
  • One debate has focused on whether some traders are firing off fake orders thousands of times a second to slow down exchanges and mislead others. Michael Durbin, who helped build high-frequency trading systems for companies like Citadel and is the author of the book “All About High-Frequency Trading,” says that most of the industry is legitimate and benefits investors. But, he says, the rules need to be strengthened to curb some disturbing practices.
  • This situation raises what I see to be core questions in the governance of innovation: To what degree can innovation be shaped for achieving intended purposes? And to what degree can the consequences of innovation be anticipated?
Weiye Loh

IPhone and Android Apps Breach Privacy - WSJ.com - 0 views

  • Few devices know more personal details about people than the smartphones in their pockets: phone numbers, current location, often the owner's real name—even a unique ID number that can never be changed or turned off.
  • An examination of 101 popular smartphone "apps"—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone's unique device ID to other companies without users' awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent age, gender and other personal details to outsiders.
  • The findings reveal the intrusive effort by online-tracking companies to gather personal data about people in order to flesh out detailed dossiers on them.
  • iPhone apps transmitted more data than the apps on phones using Google Inc.'s Android operating system. Because of the test's size, it's not known if the pattern holds among the hundreds of thousands of apps available.
  • TextPlus 4, a popular iPhone app for text messaging. It sent the phone's unique ID number to eight ad companies and the phone's zip code, along with the user's age and gender, to two of them.
  • Pandora, a popular music app, sent age, gender, location and phone identifiers to various ad networks. iPhone and Android versions of a game called Paper Toss—players try to throw paper wads into a trash can—each sent the phone's ID number to at least five ad companies. Grindr, an iPhone app for meeting gay men, sent gender, location and phone ID to three ad companies.
  • iPhone maker Apple Inc. says it reviews each app before offering it to users. Both Apple and Google say they protect users by requiring apps to obtain permission before revealing certain kinds of information, such as location.
  • The Journal found that these rules can be skirted. One iPhone app, Pumpkin Maker (a pumpkin-carving game), transmits location to an ad network without asking permission. Apple declines to comment on whether the app violated its rules.
  • With few exceptions, app users can't "opt out" of phone tracking, as is possible, in limited form, on regular computers. On computers it is also possible to block or delete "cookies," which are tiny tracking files. These techniques generally don't work on cellphone apps.
  • makers of TextPlus 4, Pandora and Grindr say the data they pass on to outside firms isn't linked to an individual's name. Personal details such as age and gender are volunteered by users, they say. The maker of Pumpkin Maker says he didn't know Apple required apps to seek user approval before transmitting location. The maker of Paper Toss didn't respond to requests for comment.
  • Many apps don't offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn't provide privacy policies on their websites or inside the apps at the time of testing. Neither Apple nor Google requires app privacy policies.
  • the most widely shared detail was the unique ID number assigned to every phone.
  • On iPhones, this number is the "UDID," or Unique Device Identifier. Android IDs go by other names. These IDs are set by phone makers, carriers or makers of the operating system, and typically can't be blocked or deleted. "The great thing about mobile is you can't clear a UDID like you can a cookie," says Meghan O'Holleran of Traffic Marketplace, an Internet ad network that is expanding into mobile apps. "That's how we track everything."
  • O'Holleran says Traffic Marketplace, a unit of Epic Media Group, monitors smartphone users whenever it can. "We watch what apps you download, how frequently you use them, how much time you spend on them, how deep into the app you go," she says. She says the data is aggregated and not linked to an individual.
  • Apple and Google ad networks let advertisers target groups of users. Both companies say they don't track individuals based on the way they use apps.
  • Apple limits what can be installed on an iPhone by requiring iPhone apps to be offered exclusively through its App Store. Apple reviews those apps for function, offensiveness and other criteria.
  • Apple says iPhone apps "cannot transmit data about a user without obtaining the user's prior permission and providing the user with access to information about how and where the data will be used." Many apps tested by the Journal appeared to violate that rule, by sending a user's location to ad networks, without informing users. Apple declines to discuss how it interprets or enforces the policy.
  • Google doesn't review the apps, which can be downloaded from many vendors. Google says app makers "bear the responsibility for how they handle user information." Google requires Android apps to notify users, before they download the app, of the data sources the app intends to access. Possible sources include the phone's camera, memory, contact list, and more than 100 others. If users don't like what a particular app wants to access, they can choose not to install the app, Google says.
  • Neither Apple nor Google requires apps to ask permission to access some forms of the device ID, or to send it to outsiders. When smartphone users let an app see their location, apps generally don't disclose if they will pass the location to ad companies.
  • Lack of standard practices means different companies treat the same information differently. For example, Apple says that, internally, it treats the iPhone's UDID as "personally identifiable information." That's because, Apple says, it can be combined with other personal details about people—such as names or email addresses—that Apple has via the App Store or its iTunes music services. By contrast, Google and most app makers don't consider device IDs to be identifying information.
  • A growing industry is assembling this data into profiles of cellphone users. Mobclix, the ad exchange, matches more than 25 ad networks with some 15,000 apps seeking advertisers. The Palo Alto, Calif., company collects phone IDs, encodes them (to obscure the number), and assigns them to interest categories based on what apps people download and how much time they spend using an app, among other factors. By tracking a phone's location, Mobclix also makes a "best guess" of where a person lives, says Mr. Gurbuxani, the Mobclix executive. Mobclix then matches that location with spending and demographic data from Nielsen Co.
  • Mobclix can place a user in one of 150 "segments" it offers to advertisers, from "green enthusiasts" to "soccer moms." For example, "die hard gamers" are 15-to-25-year-old males with more than 20 apps on their phones who use an app for more than 20 minutes at a time. Mobclix says its system is powerful, but that its categories are broad enough to not identify individuals. "It's about how you track people better," Mr. Gurbuxani says.
  • four app makers posted privacy policies after being contacted by the Journal, including Rovio Mobile Ltd., the Finnish company behind the popular game Angry Birds (in which birds battle egg-snatching pigs). A spokesman says Rovio had been working on the policy, and the Journal inquiry made it a good time to unveil it.
  • Free and paid versions of Angry Birds were tested on an iPhone. The apps sent the phone's UDID and location to the Chillingo unit of Electronic Arts Inc., which markets the games. Chillingo says it doesn't use the information for advertising and doesn't share it with outsiders.
  • Some developers feel pressure to release more data about people. Max Binshtok, creator of the DailyHoroscope Android app, says ad-network executives encouraged him to transmit users' locations. Mr. Binshtok says he declined because of privacy concerns. But ads targeted by location bring in two to five times as much money as untargeted ads, Mr. Binshtok says. "We are losing a lot of revenue."
  • Apple targets ads to phone users based largely on what it knows about them through its App Store and iTunes music service. The targeting criteria can include the types of songs, videos and apps a person downloads, according to an Apple ad presentation reviewed by the Journal. The presentation named 103 targeting categories, including: karaoke, Christian/gospel music, anime, business news, health apps, games and horror movies. People familiar with iAd say Apple doesn't track what users do inside apps and offers advertisers broad categories of people, not specific individuals. Apple has signaled that it has ideas for targeting people more closely. In a patent application filed this past May, Apple outlined a system for placing and pricing ads based on a person's "web history or search history" and "the contents of a media library." For example, home-improvement advertisers might pay more to reach a person who downloaded do-it-yourself TV shows, the document says.
  • The patent application also lists another possible way to target people with ads: the contents of a friend's media library. How would Apple learn who a cellphone user's friends are, and what kinds of media they prefer? The patent says Apple could tap "known connections on one or more social-networking websites" or "publicly available information or private databases describing purchasing decisions, brand preferences," and other data. In September, Apple introduced a social-networking service within iTunes, called Ping, that lets users share music preferences with friends. Apple declined to comment.
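The pipeline Mobclix's executive describes (collect the phone ID, encode it, assign interest segments) can be sketched in a few lines. The hashing scheme below is invented; the "die hard gamers" thresholds are the ones quoted above.

    # Toy ID-encoding and segmentation pipeline.
    import hashlib

    def encode_udid(udid: str) -> str:
        # Hashing obscures the raw number but still yields a stable key,
        # so the encoded ID tracks the same phone across apps.
        return hashlib.sha256(udid.encode()).hexdigest()[:16]

    def segment(age: int, gender: str, app_count: int, session_minutes: float) -> str:
        # "die hard gamers": 15-to-25-year-old males, >20 apps, >20-minute sessions
        if gender == "m" and 15 <= age <= 25 and app_count > 20 and session_minutes > 20:
            return "die hard gamers"
        return "general"

    key = encode_udid("A1B2C3D4-0000-0000-0000-000000000000")  # made-up UDID
    print(key, "->", segment(age=19, gender="m", app_count=32, session_minutes=25.0))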
Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The Guardian - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups were enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
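The article gives no technical detail of Rohde's software, so the sketch below is only an illustration of the compile, de-duplicate and merge step it describes. The field names, the record layout and the rule of averaging duplicate station-months are assumptions for the example, not Berkeley Earth's actual method.

```python
# Minimal sketch of compiling overlapping datasets into one record.
# The dict layout and the (station, year, month) matching key are
# illustrative assumptions, not the real Berkeley Earth pipeline.
from collections import defaultdict

def merge_station_records(datasets):
    """Merge per-station monthly records from several sources.

    `datasets` is a list of iterables of dicts like:
        {"station_id": "S1", "year": 1998, "month": 7, "temp_c": 22.4}
    Records that agree on (station, year, month) are de-duplicated by
    averaging, so no single source is double-counted.
    """
    merged = defaultdict(list)
    for source in datasets:
        for rec in source:
            key = (rec["station_id"], rec["year"], rec["month"])
            merged[key].append(rec["temp_c"])
    # Collapse duplicates: one value per station-month.
    return {key: sum(vals) / len(vals) for key, vals in merged.items()}

# Example: two sources reporting the same station-month.
a = [{"station_id": "S1", "year": 1900, "month": 1, "temp_c": 3.0}]
b = [{"station_id": "S1", "year": 1900, "month": 1, "temp_c": 3.2}]
print(merge_station_records([a, b]))  # {('S1', 1900, 1): 3.1}
```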
  • The wealth of data Rohde has collected so far – some of it dating back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, in itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are.
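A minimal sketch of the gridding approach the big three groups are described as using. The 5-degree cell size and the cosine-of-latitude weighting for combining cells into one global figure are illustrative assumptions; the article does not specify either.

```python
import math
from collections import defaultdict

def grid_average(readings, cell_deg=5.0):
    """Average one month of station readings over lat/lon grid cells,
    then combine cells into a single area-weighted global figure.
    `readings` is a list of (lat, lon, temp_c) tuples."""
    cells = defaultdict(list)
    for lat, lon, temp in readings:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(temp)
    total_w = total = 0.0
    for (i, _), temps in cells.items():
        # Cells shrink toward the poles, so weight each cell by the
        # cosine of the latitude at its centre.
        centre_lat = (i + 0.5) * cell_deg
        w = math.cos(math.radians(centre_lat))
        total += w * (sum(temps) / len(temps))
        total_w += w
    return total / total_w

readings = [(51.5, -0.1, 10.2), (52.2, 0.1, 9.8), (40.7, -74.0, 12.1)]
print(round(grid_average(readings), 2))  # ~10.8
```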
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in Celsius instead of Fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station from wind and sun more in one year than the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
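The article doesn't say which algorithms the Berkeley team uses, but one common automatic strategy is to look for step changes in a station's record and split the record there, rather than have a human patch it by hand. The sketch below is a toy version under that assumption; the window and threshold values are invented for illustration.

```python
def split_at_breakpoint(series, window=10, threshold=1.0):
    """Split an annual temperature series at the largest step change
    (a station move or instrument swap leaves a jump in the mean).
    `window` and `threshold` (deg C) are illustrative tuning choices,
    not values from the Berkeley pipeline."""
    if len(series) <= 2 * window:
        return [series]
    jumps = []
    for i in range(window, len(series) - window):
        before = sum(series[i - window:i]) / window
        after = sum(series[i:i + window]) / window
        jumps.append((abs(after - before), i))
    size, pos = max(jumps)
    if size <= threshold:
        return [series]  # no credible break: keep the record whole
    # One split per call keeps the sketch simple; a real pipeline
    # would recurse on both halves.
    return [series[:pos], series[pos:]]

# A station that reads ~1.5C warmer after year 20 (e.g. a site move):
series = [10.0] * 20 + [11.5] * 20
print([len(s) for s in split_at_breakpoint(series)])  # [20, 20]
```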
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
Weiye Loh

'Scrapers' Dig Deep for Data on the Web - WSJ.com - 0 views

  • The website PatientsLikeMe.com noticed suspicious activity on its "Mood" discussion board. There, people exchange highly personal stories about their emotional disorders, ranging from bipolar disorder to a desire to cut themselves. It was a break-in. A new member of the site, using sophisticated software, was "scraping," or copying, every single message off PatientsLikeMe's private online forums.
  • PatientsLikeMe managed to block and identify the intruder: Nielsen Co., the privately held New York media-research firm. Nielsen monitors online "buzz" for clients, including major drug makers, which buy data gleaned from the Web to get insight from consumers about their products, Nielsen says.
  • The market for personal data about Internet users is booming, and in the vanguard is the practice of "scraping." Firms offer to harvest online conversations and collect personal details from social-networking sites, résumé sites and online forums where people might discuss their lives. The emerging business of web scraping provides some of the raw material for a rapidly expanding data economy. Marketers spent $7.8 billion on online and offline data in 2009, according to the New York management consulting firm Winterberry Group LLC. Spending on data from online sources is set to more than double, to $840 million in 2012 from $410 million in 2009.
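For readers unfamiliar with the mechanics, this is roughly all a scraper is: a program that fetches forum pages and strips the message text out of the HTML. The URL scheme and the CSS selector below are hypothetical; real commercial scrapers add proxies, faked browser headers and, as in the PatientsLikeMe case, logins to private areas.

```python
# Minimal sketch of forum scraping: fetch each page of a board and
# pull the post text out of the HTML. The URL pattern and the
# "div.post" selector are assumptions; every forum differs.
import requests
from bs4 import BeautifulSoup

def scrape_forum(base_url, pages):
    posts = []
    for page in range(1, pages + 1):
        html = requests.get(f"{base_url}?page={page}", timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        posts.extend(p.get_text(strip=True) for p in soup.select("div.post"))
    return posts

# Hypothetical usage (URL is a placeholder):
# posts = scrape_forum("https://example.com/forum/mood", pages=3)
```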
  • ...6 more annotations...
  • The Wall Street Journal's examination of scraping—a trade that involves personal information as well as many other types of data—is part of the newspaper's investigation into the business of tracking people's activities online and selling details about their behavior and personal interests.
  • Some companies collect personal information for detailed background reports on individuals, such as email addresses, cell numbers, photographs and posts on social-network sites. Others offer what are known as listening services, which monitor in real time hundreds or thousands of news sources, blogs and websites to see what people are saying about specific products or topics.
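A toy version of such a listening service, assuming plain RSS 2.0 feeds: poll each feed and flag items that mention a keyword. The feed URL and keyword are placeholders; real services watch thousands of sources continuously and parse many more formats than this.

```python
# Sketch of a keyword "listening" monitor over RSS 2.0 feeds.
# Handles only the simple RSS case (channel/item/title elements).
import urllib.request
import xml.etree.ElementTree as ET

def mentions(feed_urls, keyword):
    hits = []
    for url in feed_urls:
        with urllib.request.urlopen(url, timeout=10) as resp:
            root = ET.parse(resp).getroot()
        for item in root.iter("item"):
            title = item.findtext("title") or ""
            if keyword.lower() in title.lower():
                hits.append(title)
    return hits

# Hypothetical usage (URL is a placeholder):
# print(mentions(["https://example.com/news.rss"], "aspirin"))
```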
  • One such service is offered by Dow Jones & Co., publisher of the Journal. Dow Jones collects data from the Web—which may include personal information contained in news articles and blog postings—that help corporate clients monitor how they are portrayed. It says it doesn't gather information from password-protected parts of sites.
  • The competition for data is fierce. PatientsLikeMe also sells data about its users; it says the data it sells is anonymized, with no names attached.
  • Nielsen spokesman Matt Anchin says the company's reports to its clients include publicly available information gleaned from the Internet, "so if someone decides to share personally identifiable information, it could be included."
  • Internet users often have little recourse if personally identifiable data is scraped: There is no national law requiring data companies to let people remove or change information about themselves, though some firms let users remove their profiles under certain circumstances.
Weiye Loh

Land Destroyer: Alternative Economics - 0 views

  • Peer to peer file sharing (P2P) has made media distribution free and has become the bane of media monopolies. P2P file sharing means digital files can be copied and distributed at no cost. CDs, DVDs and other older media formats are no longer necessary, nor is the cost involved in making them or distributing them along a traditional logistical supply chain. Disc burners, however, let users create their own physical copies at a fraction of the cost of buying the media from the stores. Supply and demand is turned on its head: the more popular a file becomes via demand, the more of it is available for sharing, and the easier it is to obtain. Supply and demand increase in tandem towards a lower "price" of obtaining the file. Consumers demand more as price decreases; producers naturally want to produce more of something as price increases. Somewhere in between, consumers and producers meet at the market price or "market equilibrium". P2P technology eliminates material scarcity, so the more a file is in demand, the more people end up downloading it, and the easier it is for others to find and download it. Consider the implications this would have if technology made physical objects as easy to "share" as information is now.
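To make the annotation's equilibrium point concrete, here is the textbook case in a few lines, with invented linear curves rather than numbers from the post: for an ordinary good the two curves cross at a single price, whereas a freely copied file has no material scarcity, so effective supply grows with every download and the clearing price collapses toward zero.

```python
# Textbook market equilibrium with illustrative linear curves.
def equilibrium(a=100, b=2, c=10, d=1):
    """Demand Qd = a - b*P, supply Qs = c + d*P; solve Qd = Qs."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

print(equilibrium())  # (30.0, 40.0)

# A freely copied file breaks this model: each download adds a copy,
# so supply rises with demand and the clearing price falls to zero.
```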
  • In the end, it is not government regulations, legal contrivances, or licenses that govern information, but rather the free market mechanism commonly referred to as Adam Smith's self regulating "Invisible Hand of the Market." In other words, people selfishly seeking accurate information for their own benefit encourage producers to provide the best possible information to meet their demand. While this is not possible in a monopoly, particularly the corporate media monopoly of the "left/right paradigm" of false choice, it is inevitable in the field of real competition that now exists online due to information technology.
  • Compounding the establishment's troubles are cheaper cameras and cheaper, more capable software for 3D graphics, editing, mixing and other post-production tasks, allowing for the creation of an alternative publishing, audio and video industry. "Underground" counter-corporate music and film has been around for a long time, but through the combination of this technology and the zealous corporate lawyers disenfranchising a whole new generation that now seeks an alternative, it is truly coming of age.
  • ...3 more annotations...
  • With a growing community of people determined to become collaborative producers rather than fit into the producer/consumer paradigm, and 3D files for physical objects already being shared like movies and music, the implications are profound. Products, and the manufacturing technology used to make them will continue to drop in price, become easier to make for individuals rather than large corporations, just as media is now shifting into the hands of the common people. And like the shift of information, industry will move from the elite and their agenda of preserving their power, to the end of empowering the people.
  • In a future alternative economy where everyone is a collaborative designer, producer, and manufacturer instead of passive consumers and when problems like "global climate change," "overpopulation," and "fuel crises" cross our path, we will counter them with technical solutions, not political indulgences like carbon taxes, and not draconian decrees like "one-child policies."
  • We will become the literal architects of our own future in this "personal manufacturing" revolution. While these technologies may still appear primitive, or somewhat "useless" or "impractical", we must remember where our personal computers stood on the eve of the information age and how quickly they changed our lives. And while many of us may be unaware of this unfolding revolution, you can bet the globalists, power brokers and all those who stand to lose from it not only see it but are already actively fighting against it. Understandably, it takes some technical know-how to jump into the personal manufacturing revolution. In part 2 of "Alternative Economics" we will explore real-world "low-tech" solutions for becoming self-sufficient and local, and rediscover the empowerment granted by doing so.