New Media Ethics 2009 course: Group items tagged "number"


Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones - 0 views

  • Even if individual researchers are prone to falling in love with their own theories, the broader processes of peer review and institutionalized skepticism are designed to ensure that, eventually, the best ideas prevail.
  • Modern science originated from an attempt to weed out such subjective lapses
  • Our individual responses to the conclusions that science reaches, however, are quite another matter. Ironically, in part because researchers employ so much nuance and strive to disclose all remaining sources of uncertainty, scientific evidence is highly susceptible to selective reading and misinterpretation.
  • a large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs.
  • In a classic 1979 experiment (PDF), pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies—and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more "convincing."
  • According to research by Yale Law School professor Dan Kahan and his colleagues, people's deep-seated views about morality, and about the way society should be ordered, strongly predict whom they consider to be a legitimate scientific expert in the first place—and thus where they consider "scientific consensus" to lie on contested issues.
  • people rejected the validity of a scientific source because its conclusion contradicted their deeply held views—and thus the relative risks inherent in each scenario.
  • When political scientists Brendan Nyhan and Jason Reifler showed subjects fake newspaper articles (PDF) in which this was first suggested (in a 2004 quote from President Bush) and then refuted (with the findings of the Bush-commissioned Iraq Survey Group report, which found no evidence of active WMD programs in pre-invasion Iraq), they found that conservatives were more likely than before to believe the claim.
Weiye Loh

Do peer reviewers get worse with experience? Plus a poll « Retraction Watch - 0 views

  • We’re not here to defend peer review against its many critics. We have the same feelings about it that Churchill did about democracy, aka the worst form of government except for all those others that have been tried. Of course, a good number of the retractions we write about are due to misconduct, and it’s not clear how peer review, no matter how good, would detect out-and-out fraud.
  • With that in mind, a paper published last week in the Annals of Emergency Medicine caught our eye. Over 14 years, 84 editors at the journal rated close to 15,000 reviews by about 1,500 reviewers. Highlights of their findings: …92% of peer reviewers deteriorated during 14 years of study in the quality and usefulness of their reviews (as judged by editors at the time of decision), at rates unrelated to the length of their service (but moderately correlated with their mean quality score, with better-than-average reviewers decreasing at about half the rate of those below average). Only 8% improved, and those by a very small amount.
  • The average reviewer in our study would have taken 12.5 years to reach this threshold; only 3% of reviewers whose quality decreased would have reached it in less than 5 years, and even the worst would take 3.2 years. Another 35% of all reviewers would reach the threshold in 5 to 10 years, 28% in 10 to 15 years, 12% in 15 to 20 years, and 22% in 20 years or more. So the decline was slow. Still, the results, note the authors, were surprising: Such a negative overall trend is contrary to most editors’ and reviewers’ intuitive expectations and beliefs about reviewer skills and the benefits of experience.
  • What could account for this decline? The study’s authors say it might be the same sort of decline you generally see as people get older. This is well-documented in doctors, so why shouldn’t it be true of doctors — and others — who peer review?
  • Other than the well-documented cognitive decline of humans as they age, there are other important possible causes of deterioration of performance that may play a role among scientific reviewers. Examples include premature closure of decisionmaking, less compliance with formal structural review requirements, and decay of knowledge base with time (ie, with aging more of the original knowledge base acquired in training becomes out of date). Most peer reviewers say their reviews have changed with experience, becoming shorter and focusing more on methods and larger issues; only 25% think they have improved.
  • Decreased cognitive performance capability may not be the only or even chief explanation. Competing career activities and loss of motivation as tasks become too familiar may contribute as well, by decreasing the time and effort spent on the task. Some research has concluded that the decreased productivity of scientists as they age is due not to different attributes or access to resources but to “investment motivation.” This is another way of saying that competition for the reviewer’s time (which is usually uncompensated) increases with seniority, as they develop (more enticing) opportunities for additional peer review, research, administrative, and leadership responsibilities and rewards. However, from the standpoint of editors and authors (or patients), whether the cause of the decrease is decreasing intrinsic cognitive ability or diminished motivation and effort does not matter. The result is the same: a less rigorous review by which to judge articles
  • What can be done? The authors recommend “deliberate practice,” which involves assessing one’s skills, accurately identifying areas of relative weakness, performing specific exercises designed to improve and extend those weaker skills, and investing high levels of concentration and hundreds or thousands of hours in the process. A key component of deliberate practice is immediate feedback on one’s performance. There’s a problem: But acting on prompt feedback (to guide deliberate practice) would be almost impossible for peer reviewers, who typically get no feedback (and qualitative research reveals this is one of their chief complaints).
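The study's "years to reach this threshold" figures are simple arithmetic on a reviewer's starting score and annual rate of decline. A minimal sketch of that calculation; the score scale, starting score, threshold, and decline rate below are invented for illustration and are not the study's actual values.

```python
# Hypothetical illustration of "years to reach a quality threshold".
# The 5-point scale, starting score, threshold, and annual decline are
# assumed values for the example, not figures from the Annals study.

def years_to_threshold(start_score: float, threshold: float, annual_decline: float) -> float:
    """Years until a reviewer's mean quality score falls to the threshold,
    assuming a constant linear decline per year."""
    if annual_decline <= 0:
        raise ValueError("annual_decline must be positive for a declining reviewer")
    return (start_score - threshold) / annual_decline

# A reviewer starting at 3.8 on a 5-point editor rating, with a "no longer
# useful" threshold of 3.0 and a decline of 0.064 points per year, would
# take about 12.5 years to cross it.
print(round(years_to_threshold(3.8, 3.0, 0.064), 1))  # -> 12.5
```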
Weiye Loh

Android phones record user-locations according to research | Technology | The Guardian - 0 views

  • The discovery that Android devices - which are quickly becoming the best-selling products in the smartphone space - also collect location data indicates how essential such information has become to their effective operation. "Location services", which can help place a user on a map, are increasingly seen as important for providing enhanced services including advertising - which forms the basis of Google's business.
  • Smartphones running Google's Android software collect data about the user's movements in almost exactly the same way as the iPhone, according to an examination of files they contain. The discovery, made by a Swedish researcher, comes as the Democratic senator Al Franken has written to Apple's chief executive Steve Jobs demanding to know why iPhones keep a secret file recording the location of their users as they move around, as the Guardian revealed this week.
  • Magnus Eriksson, a Swedish programmer, has shown that Android phones – now the bestselling smartphones – do the same, though for a shorter period. According to the files discovered by Eriksson, Android devices keep a record of the locations and unique IDs of the last 50 mobile masts they have communicated with, and the last 200 Wi-Fi networks they have "seen". These are overwritten, oldest first, when the relevant list is full. It is not yet known whether the lists are sent to Google. That differs from Apple, where the data is stored for up to a year.
  • In addition, the file is not easily accessible to users: it requires some computer skills to extract the data. By contrast, the Apple file is easily extracted directly from the computer or phone.
  • Senator Franken has asked Jobs to explain the purpose and extent of the iPhone's tracking. "The existence of this information - stored in an unencrypted format - raises serious privacy concerns," Franken writes in his letter to Jobs. "Anyone who gains access to this single file could likely determine the location of a user's home, the businesses he frequents, the doctors he visits, the schools his children attend, and the trips he has taken - over the past months or even a year."
  • Franken points out that a stolen or lost iPhone or iPad could be used to map out its owner's precise movements "for months at a time" and that it is not limited by age, meaning that it could track the movements of users who are under 13
  • A security researcher, Alex Levinson, says that he discovered the file inside the iPhone last year, and that it has been used in the US by the police in a number of cases. He says that its purpose is simply to help the phone determine its location, and that he has seen no evidence that it is sent back to Apple. However, documents lodged by Apple with the US Congress suggest that it does use the data if the user agrees to give the company "diagnostic information" from their iPhone or iPad.
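The behaviour described above (keep only the last 50 cell masts and last 200 Wi-Fi networks, overwriting the oldest entries once a list is full) amounts to a bounded, oldest-first-evicting cache. A minimal sketch of that data structure, in Python; the class and field names are hypothetical and this is not Android's actual implementation or file format.

```python
# Toy model of the behaviour reported for Android's location caches:
# keep only the last N entries, overwriting the oldest when full.
# Sizes mirror the article's description; this is not the real cache format.
from collections import deque

class LocationCache:
    def __init__(self, max_entries: int):
        # A deque with maxlen discards the oldest item when a new one is
        # appended to a full queue - "overwritten, oldest first".
        self._entries = deque(maxlen=max_entries)

    def record(self, unique_id: str, lat: float, lon: float, timestamp: int):
        self._entries.append((unique_id, lat, lon, timestamp))

    def dump(self):
        return list(self._entries)

cell_cache = LocationCache(max_entries=50)    # last 50 mobile masts
wifi_cache = LocationCache(max_entries=200)   # last 200 Wi-Fi networks

cell_cache.record("cid-310-410-1234", 51.5074, -0.1278, 1303430400)
print(len(cell_cache.dump()))  # -> 1
```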
Weiye Loh

My conversation with the TNP editor. - 0 views

  • 1. One cannot criticise if one has not read the article. Mr Singh said that between the reporter (Ms Sim) and himself, they had received 7 email complaints, with many more criticisms online. However, he said that most of these complaints had come from people who were reacting solely to the front page without reading the article. Mr Singh said that he had expected most intelligent and educated Singaporeans to have read the article before jumping the gun to judge TNP for their article.
  • 2. "Are Singaporeans ready for a gay MP?" was the angle TNP chose to take because they thought it was an important issue concerning votersEven though the PAP said that Dr Wijeysingha's sexual orientation was not an issue for them, TNP felt that it was an issue for Singaporean voters. They therefore went out to poll Singaporeans about whether they were ready for a gay MP. 76% of the Singaporeans polled said that they would be fine with a gay MP. This, Mr Singh said, actually helps SDP more than the PAP, and therefore he felt that it was quite "ballsy" of TNP to have taken this angle. However, TNP only polled approx. 130 (I forget the real number) people and so it would not have been statistically correct for the headline to say "Singaporeans are ready for a gay MP". (This was in response to my question about why the headline could not have reflected the poll.) He also said that TNP decided to do a poll about lowering the age of consent in Singapore because it was an issue raised in the video (albeit not by Dr Wijeysingha) and they felt that it was relevant to Singaporeans. 
  • 3. It is only a smear campaign if what the PAP say about Vincent Wijeysingha is untrue. Mr Singh said that it would only be accurate to say that the PAP has launched a "smear campaign" against Dr Wijeysingha if what they are saying is untrue. However, what the PAP has said is true, and so it cannot be labelled a "smear campaign". He said that he had asked the SDP if they had attempted to suppress the video in question, and that the SDP said yes. So the PAP hadn't been "smearing" the SDP by implying they were trying to suppress the video, because they were.
  • 5. Don't add "fuel to the flame". Mr Singh felt that my complaint to TNP was simply adding "fuel to the flame", leading more people to prejudge the article. I asked if TNP would now proceed to cover another angle of the story, picking up on the strong reactions online questioning Dr Balakrishnan. Mr Singh said that they would not, as they did not wish to add more "fuel to the flame".
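The statistical caveat in point 2 can be made concrete with a standard margin-of-error calculation. A rough sketch, assuming a simple random sample of 130 and the reported 76% figure; the poll's actual methodology is not described, so this is only illustrative, not TNP's own analysis.

```python
# Approximate 95% confidence interval for a sample proportion, using the
# normal approximation. The sample size (~130) and result (76%) come from
# the annotation above; the rest is standard textbook arithmetic.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.76, 130
moe = margin_of_error(p, n)
print(f"76% +/- {moe:.1%}")                      # roughly +/- 7.3 points
print(f"95% CI: {p - moe:.1%} to {p + moe:.1%}")  # roughly 68.7% to 83.3%
```

Even before asking whether the sample was representative, an interval of roughly plus or minus 7 percentage points makes a categorical "Singaporeans are ready" headline hard to justify.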
Weiye Loh

Rationally Speaking: A pluralist approach to ethics - 0 views

  • The history of Western moral philosophy includes numerous attempts to ground ethics in one rational principle, standard, or rule. This narrative stretches back 2,500 years to the Greeks, who were interested mainly in virtue ethics and the moral character of the person. The modern era has seen two major additions. In 1785, Immanuel Kant introduced the categorical imperative: act only under the assumption that what you do could be made into a universal law. And in 1789, Jeremy Bentham proposed utilitarianism: work toward the greatest happiness of the greatest number of people (the “utility” principle).
  • Many people now think projects to build a reasonable and coherent moral system are doomed. Still, most secular and religious people reject the alternative of moral relativism, and have spent much ink criticizing it (among my favorite books on the topic is Moral Relativism by Stephen Lukes). The most recent and controversial work in this area comes from Sam Harris. In The Moral Landscape, Harris argues for a morality based on (a science of) well-being and flourishing, rather than religious dogma.
  • I am interested in another oft-heard criticism of Harris’ book, which is that words like “well-being” and “flourishing” are too general to form any relevant basis for morality. This criticism has some force to it, as these certainly are somewhat vague terms. But what if “well-being” and “flourishing” were to be used only as a starting point for a moral framework? These concepts would still put us on a better grounding than religious faith. But they cannot stand alone. Nor do they need to.
  • 1. The harm principle bases our ethical considerations on other beings’ capacity for higher-level subjective experience. Human beings (and some animals) have the potential — and desire — to experience deep pleasure and happiness while seeking to avoid pain and suffering. We have the obligation, then, to afford creatures with these capacities, desires and relations a certain level of respect. They also have other emotional and social interests: for instance, friends and families concerned with their health and enjoyment. These actors also deserve consideration.
  • 2. If we have a moral obligation to act a certain way toward someone, that should be reflected in law. Rights theory is the idea that there are certain rights worth granting to people with very few, if any, caveats. Many of these rights were spelled out in the founding documents of this country, the Declaration of Independence (which admittedly has no legal pull) and the Constitution (which does). They have been defended in a long history of U.S. Supreme Court rulings. They have also been expanded on in the U.N.’s 1948 Universal Declaration of Human Rights and in the founding documents of other countries around the world. To name a few, they include: freedom of belief, speech and expression, due process, equal treatment, health care, and education.
  • 3. While we ought to consider our broader moral efforts, and focus on our obligations to others, it is also important to place attention on our quality as moral agents. A vital part of fostering a respectable pluralist moral framework is to encourage virtues, and cultivate moral character. A short list of these virtues would include prudence, justice, wisdom, honesty, compassion, and courage. One should study these, and strive to put these into practice and work to be a better human being, as Aristotle advised us to do.
  • most people already are ethical pluralists. Life and society are complex to navigate, and one cannot rely on a single idea for guidance. It is probably accurate to say that people lean more toward one theory, rather than practice it to the exclusion of all others. Of course, this only describes the fact that people think about morality in a pluralistic way. But the outlined approach is supported by sound reasoning — that is, unless you are ready to entirely dismiss 2,500 years of Western moral philosophy.
  • while each ethical system discussed so far has its shortcomings, put together they form a solid possibility. One system might not be able to do the job required, but we can assemble a mature moral outlook containing parts drawn from different systems put forth by philosophers over the centuries (plus some biology, but that's Massimo's area). The following is a rough sketch of what I think a decent pluralist approach to ethics might look like.
Weiye Loh

Google's War on Nonsense - NYTimes.com - 0 views

  • As a verbal artifact, farmed content exhibits neither style nor substance.
  • The insultingly vacuous and frankly bizarre prose of the content farms — it seems ripped from Wikipedia and translated from the Romanian — cheapens all online information.
  • These prose-widgets are not hammered out by robots, surprisingly. But they are written by writers who work like robots. As recent accounts of life in these words-are-money mills make clear, some content-farm writers have deadlines as frequently as every 25 minutes. Others are expected to turn around reported pieces, containing interviews with several experts, in an hour. Some compose, edit, format and publish 10 articles in a single shift. Many with decades of experience in journalism work 70-hour weeks for salaries of $40,000 with no vacation time. The content farms have taken journalism hackwork to a whole new level.
  • So who produces all this bulk jive? Business Insider, the business-news site, has provided a forum to a half dozen low-paid content farmers, especially several who work at AOL’s enormous Seed and Patch ventures. They describe exhausting and sometimes exploitative writing conditions. Oliver Miller, a journalist with an MFA in fiction from Sarah Lawrence who once believed he’d write the Great American Novel, told me AOL paid him about $28,000 for writing 300,000 words about television, all based on fragments of shows he’d never seen, filed in half-hour intervals, on a graveyard shift that ran from 11 p.m. to 7 or 8 in the morning.
  • Mr. Miller’s job, as he made clear in an article last week in The Faster Times, an online newspaper, was to cram together words that someone’s research had suggested might be in demand on Google, position these strings as titles and headlines, embellish them with other inoffensive words and make the whole confection vaguely resemble an article. AOL would put “Rick Fox mustache” in a headline, betting that some number of people would put “Rick Fox mustache” into Google, and retrieve Mr. Miller’s article. Readers coming to AOL, expecting information, might discover a subliterate wasteland. But before bouncing out, they might watch a video clip with ads on it. Their visits would also register as page views, which AOL could then sell to advertisers.
  • commodify writing: you pay little or nothing to writers, and make readers pay a lot — in the form of their “eyeballs.” But readers get zero back, no useful content.
  • You can’t mess with Google forever. In February, the corporation concocted what it concocts best: an algorithm. The algorithm, called Panda, affects some 12 percent of searches, and it has — slowly and imperfectly — been improving things. Just a short time ago, the Web seemed ungovernable; bad content was driving out good. But Google asserted itself, and credit is due: Panda represents good cyber-governance. It has allowed Google to send untrustworthy, repetitive and unsatisfying content to the back of the class. No more A’s for cheaters.
  • the goal, according to Amit Singhal and Matt Cutts, who worked on Panda, is to “provide better rankings for high-quality sites — sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.”
  • Google officially rolled out Panda 2.2. Put “Whitey Bulger” into Google, and where you might once have found dozens of content farms, today you get links to useful articles from sites ranging from The Boston Globe, The Los Angeles Times, the F.B.I. and even Mashable, doing original analysis of how federal agents used social media to find Bulger. Last month, Demand Media, once the most notorious of the content farms, announced plans to improve quality by publishing more feature articles by hired writers, and fewer by “users” — code for unpaid freelancers. Amazing. Demand Media is stepping up its game.
  • Content farms, which have flourished on the Web in the past 18 months, are massive news sites that use headlines, keywords and other tricks to lure Web-users into looking at ads. These sites confound and embarrass Google by gaming its ranking system. As a business proposition, they once seemed exciting. Last year, The Economist admiringly described Associated Content and Demand Media as cleverly cynical operations that "aim to produce content at a price so low that even meager advertising revenue can support it."
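Google has not published Panda's internals, so the following is only a toy illustration of the general idea the annotations describe: re-ranking results so that pages flagged as low-quality (thin, repetitive, untrustworthy) sink below original, in-depth ones. The scoring fields, weights, and URLs are invented for the example; this is not Google's algorithm.

```python
# Toy re-ranker that demotes low-quality results. An invented illustration
# of quality-based re-ranking, not Google's Panda algorithm.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float   # query-matching score, 0..1
    quality: float     # site-quality score, 0..1 (originality, depth, trust)

def rerank(results: list[Result], quality_weight: float = 0.5) -> list[Result]:
    # Blend relevance with quality so keyword-stuffed but thin pages
    # fall behind original, substantive ones.
    return sorted(
        results,
        key=lambda r: (1 - quality_weight) * r.relevance + quality_weight * r.quality,
        reverse=True,
    )

results = [
    Result("contentfarm.example/rick-fox-mustache", relevance=0.95, quality=0.10),
    Result("newsroom.example/rick-fox-profile", relevance=0.80, quality=0.90),
]
for r in rerank(results):
    print(r.url)
# The newsroom page now outranks the farm page despite a weaker keyword match.
```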
Weiye Loh

Fukushima babies and how numbers can lie - Boing Boing - 0 views

  • Over at Scientific American, Michael Moyer takes a critical look at an Al Jazeera story about a recent study purporting to show that infant deaths on the American West Coast increased by 35% as a result of fallout from the Fukushima Daiichi nuclear power plant meltdown.
  • At first glance, the story looks credible. And scary. The information comes from a physician, Janette Sherman MD, and epidemiologist Joseph Mangano, who got their data from the Centers for Disease Control and Prevention's Morbidity and Mortality Weekly Reports—a newsletter that frequently helps public health officials spot trends in death and illness.
  • Look closer, though, and the credibility vanishes. For one thing, this isn't a formal scientific study and Sherman and Mangano didn't publish their findings in a peer-reviewed journal, or even on a science blog. Instead, all of this comes from an essay the two wrote for Counter Punch, a political newsletter.
  • Let's first consider the data that the authors left out of their analysis. It's hard to understand why the authors stopped at these eight cities. Why include Boise but not Tacoma? Or Spokane? Both have about the same size population as Boise, they're closer to Japan, and the CDC includes data from Tacoma and Spokane in the weekly reports.
  • More important, why did the authors choose to use only the four weeks preceding the Fukushima disaster? Here is where we begin to pick up a whiff of data fixing. ... While it certainly is true that there were fewer deaths in the four weeks leading up to Fukushima than there have been in the 10 weeks following, the entire year has seen no overall trend. When I plotted a best-fit line to the data, Excel calculated a very slight decrease in the infant mortality rate. Only by explicitly excluding data from January and February were Sherman and Mangano able to froth up their specious statistical scaremongering.
  • When you think about what information to be skeptical of, that decision can't begin and end with "corporate interests." Yes, those sources often give you bad information. But bad information comes from other places, too. The Fukushima accident was worse than TEPCO wanted people to believe when it first happened. Radiation isn't healthy for you, and there are people (plant workers, emergency crews, people who lived nearby) who will be dealing with the effects of Fukushima for years to come. But the fact that all of that is true does not mean that we should uncritically accept it when somebody says that radiation from Fukushima is killing babies in the United States. Just because the corporate interests are in the wrong doesn't mean that every claim against them is true.
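The "best-fit line" point above is easy to reproduce: fit a linear trend to the full series of weekly counts, then compare a short window that starts only a few weeks before the event, and the cherry-picked window can suggest a jump the full series does not support. A minimal sketch with made-up weekly numbers; the real CDC figures are not reproduced here.

```python
# Illustration of how restricting the comparison window can manufacture an
# apparent increase. The weekly counts are invented, roughly flat data with
# noise - they are NOT the CDC mortality figures.
import random
random.seed(0)

weeks = list(range(1, 25))                      # about six months of weekly data
deaths = [random.gauss(10, 2) for _ in weeks]   # flat series with noise

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

event_week = 12
pre = deaths[event_week - 4:event_week]    # only the 4 weeks before the event
post = deaths[event_week:event_week + 10]  # the 10 weeks after

print("full-series trend per week:", round(slope(weeks, deaths), 3))
print("pre-event mean :", round(sum(pre) / len(pre), 2))
print("post-event mean:", round(sum(post) / len(post), 2))
# With trendless noisy data, whether a 4-week pre-event window sits above or
# below the post-event mean is largely luck - which is why the choice of
# window matters so much.
```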
Weiye Loh

Turning Privacy "Threats" Into Opportunities - Esther Dyson - Project Syndicate - 0 views

  • Most disclosure statements are not designed to be read; they are designed to be clicked on. But some companies actually want their customers to read and understand the statements. They don’t want customers who might sue, and, just in case, they want to be able to prove that the customers did understand the risks. So the leaders in disclosure statements right now tend to be financial and health-care companies – and also space-travel and extreme-sports vendors. They sincerely want to let their customers know what they are getting into, because a regretful customer is a vengeful one. That means making disclosure statements readable. I would suggest turning them into a quiz. The user would not simply click a single button, but would have to select the right button for each question. For example: What are my chances of dying in space? A) 5% B) 30% C) 1-4% (the correct answer, based on experience so far; current spacecraft are believed to be safer.) Now imagine: Who can see my data? A) I can. B) XYZ Corporation. C) XYZ Corporation’s marketing partners. (Click here to see the list.) D) XYZ Corporation’s affiliates and anyone it chooses. As the customer picks answers, she gets a good idea of what is going on. In fact, if you're a marketer, why not dispense with a single right answer and let the consumer specify what she wants to have happen with her data (and corresponding privileges/access rights if necessary)? That’s much more useful than vague policy statements. Suddenly, the disclosure statement becomes a consumer application that adds value to the vendor-consumer relationship.
  • And show the data themselves rather than a description.
  • this is all very easy if you are the site with which the user communicates directly; it is more difficult if you are in the background, a third party collecting information surreptitiously. But that practice should be stopped, anyway.
  • just as they have with Facebook, users will become more familiar with the idea of setting their own privacy preferences and managing their own data. Smart vendors will learn from Facebook; the rest will lose out to competitors. Visualizing the user's information and providing an intelligible interface is an opportunity for competitive advantage.
  • I see this happening already with a number of companies, including some with which I am involved. For example, in its research surveys, 23andMe asks people questions such as how often they have headaches or whether they have ever been exposed to pesticides, and lets them see (in percentages) how other 23andMe users answer the question. This kind of information is fascinating to most people. TripIt lets you compare and match your own travel plans with those of friends. Earndit lets you compete with others to exercise more and win points and prizes.
  • Consumers increasingly expect to be able to see themselves both as individuals and in context. They will feel more comfortable about sharing data if they feel confident that they know what is shared and what is not. The online world will feel like a well-lighted place with shops, newsstands, and the like, where you can see other people and they can see you. Right now, it more often feels like lurking in a spooky alley with a surveillance camera overlooking the scene.
  • Of course, there will be “useful” data that an individual might not want to share – say, how much alcohol they buy, which diseases they have, or certain of their online searches. They will know how to keep such information discreet, just as they might close the curtains to get undressed in their hotel room after enjoying the view from the balcony. Yes, living online takes a little more thought than living offline. But it is not quite as complex once Internet-based services provide the right tools – and once awareness and control of one’s own data become a habit.
  • companies see consumer data as something that they can use to target ads or offers, or perhaps that they can sell to third parties, but not as something that consumers themselves might want. Of course, this is not an entirely new idea, but most pundits on both sides - privacy advocates and marketers - don't realize that rather than protecting consumers or hiding from them, companies should be bringing them into the game. I believe that successful companies will turn personal data into an asset by giving it back to their customers in an enhanced form. I am not sure exactly how this will happen, but current players will either join this revolution or lose out.
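Dyson's quiz-style disclosure is essentially a small interactive form whose answers become the user's actual data-sharing settings rather than a click-through acknowledgement. A minimal sketch of that idea; the question text, options, and setting names are hypothetical, not any vendor's real consent flow.

```python
# Hypothetical quiz-style disclosure: the user's answer to "Who can see my
# data?" is stored directly as their sharing preference. Question and
# options are invented for illustration.
QUESTION = "Who can see my data?"
OPTIONS = {
    "A": "Only me",
    "B": "XYZ Corporation",
    "C": "XYZ Corporation's marketing partners",
    "D": "XYZ Corporation's affiliates and anyone it chooses",
}

def run_disclosure_quiz(answer: str) -> dict:
    if answer not in OPTIONS:
        raise ValueError(f"Pick one of {sorted(OPTIONS)}")
    # The chosen answer is not just an acknowledgement - it becomes the
    # account's access-control setting.
    return {"question": QUESTION, "choice": answer, "data_visible_to": OPTIONS[answer]}

profile = run_disclosure_quiz("B")
print(profile["data_visible_to"])   # -> XYZ Corporation
```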
funeral adelaide

Top Funeral Service in Adelaide - 1 views

Sensible Funerals handled the funeral of my late grandmother. It was really difficult for me and my family to say goodbye. That is why we wanted to give her the best funeral service though our fam...

Funeral Adelaide

started by funeral adelaide on 18 Oct 11 no follow-up yet