
Home/ New Media Ethics 2009 course/ Group items tagged new


Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus.
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which begs the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologist ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But for the ancient Greeks they had completely different meanings, which make perfect sense then and now, provided we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens might or might not be right (or wrong) in Sparta, and that an Athenian had a duty to Athens but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone, it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see Table), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

    Vice of Deficiency    | Virtuous Mean     | Vice of Excess
    ----------------------|-------------------|---------------
    Cowardice             | Courage           | Rashness
    Insensibility         | Temperance        | Intemperance
    Illiberality          | Liberality        | Prodigality
    Pettiness             | Munificence       | Vulgarity
    Humble-mindedness     | High-mindedness   | Vaingloriness
    Want of Ambition      | Right Ambition    | Over-ambition
    Spiritlessness        | Good Temper       | Irascibility
    Surliness             | Friendly Civility | Obsequiousness
    Ironical Depreciation | Sincerity         | Boastfulness
    Boorishness           | Wittiness         | Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turns means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? < Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers < That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
  • “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.” So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?
Weiye Loh

Adventures in Flay-land: Dealing with Denialists - Delingpole Part III - 0 views

  • This post is about how one should deal with a denialist of Delingpole's ilk.
  • I saw someone I follow on Twitter retweet an update from another Twitter user called @AGW_IS_A_HOAX, which was this: "NZ #Climate Scientists Admit Faking Temperatures http://bit.ly/fHbdPI RT @admrich #AGW #Climategate #Cop16 #ClimateChange #GlobalWarming".
  • So I click on it. And this is how you deal with a denialist claim. You actually look into it. Here is the text of that article reproduced in full: New Zealand Climate Scientists Admit To Faking Temperatures: The Actual Temps Show Little Warming Over Last 50 Years. Read here and here. Climate "scientists" across the world have been blatantly fabricating temperatures in hopes of convincing the public and politicians that modern global warming is unprecedented and accelerating. The scientists doing the fabrication are usually employed by the government agencies or universities, which thrive and exist on taxpayer research dollars dedicated to global warming research. A classic example of this is the New Zealand climate agency, which is now admitting their scientists produced bogus "warming" temperatures for New Zealand. "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century. For all their talk about warming, for all their rushed invention of the “Eleven-Station Series” to prove warming, this new series shows that no warming has occurred here since about 1960. Almost all the warming took place from 1940-60, when the IPCC says that the effect of CO2 concentrations was trivial. Indeed, global temperatures were falling during that period. ... Almost all of the 34 adjustments made by Dr Jim Salinger to the 7SS have been abandoned, along with his version of the comparative station methodology." A collection of temperature-fabrication charts.
  • I check out the first link, the first "here" where the article says "Read here and here". I can see that there's been some sort of dispute between two New Zealand groups associated with climate change. One is New Zealand’s Climate Science Coalition (NZCSC) and the other is New Zealand’s National Institute of Water and Atmospheric Research (NIWA), but it doesn't tell me a whole lot more than I already got from the other article.
  • I check the second source behind that article. The second article, I now realize, is published on the website of a person called Andrew Montford with whom I've been speaking recently and who is the author of a book titled The Hockey Stick Illusion. I would not label Andrew a denialist. He makes some good points and seems to be a decent guy and genuine sceptic (This is not to suggest all denialists are outwardly dishonest; however, they do tend to be hard to reason with). Again, this article doesn't give me anything that I haven't already seen, except a link to another background source. I go there.
  • From this piece written up on Scoop NZNEWSUK I discover that a coalition group consisting of the NZCSC and the Climate Conversation Group (CCG) has pressured the NIWA into abandoning a set of temperature record adjustments whose validity the coalition disputes. This was the culmination of a court proceeding in December 2010, last month. In dispute were 34 adjustments that had been made by Dr Jim Salinger to the 7SS temperature series, though I don't know what that is exactly. I also discover that there is a guy called Richard Treadgold, Convenor of the CCG, who is quoted several times. Some of the statements he makes are quoted in the articles I've already seen. They are of a somewhat snide tenor. The CSC object to the methodology used by the NIWA to adjust temperature measurements (one developed as part of a PhD thesis), which they critiqued in a November 2009 paper titled "Are we feeling warmer yet?", and are concerned about how this public agency is spending its money. I'm going to have to dig a bit deeper if I want to find out more. There is a section with links under the heading "Related Stories on Scoop". I click on a few of those.
  • One of these leads me to more. Of particular interest is a fairly neutral article outlining the progress of the court action. I get some more background: For the last ten years, visitors to NIWA’s official website have been greeted by a graph of the “seven-station series” (7SS), under the bold heading “New Zealand Temperature Record”. The graph covers the period from 1853 to the present, and is adorned by a prominent trend-line sloping sharply upwards. Accompanying text informs the world that “New Zealand has experienced a warming trend of approximately 0.9°C over the past 100 years.” The 7SS has been updated and used in every monthly issue of NIWA’s “Climate Digest” since January 1993. Its 0.9°C (sometimes 1.0°C) of warming has appeared in the Australia/NZ Chapter of the IPCC’s 2001 and 2007 Assessment Reports. It has been offered as sworn evidence in countless tribunals and judicial enquiries, and provides the historical base for all of NIWA’s reports to both Central and Local Governments on climate science issues and future projections.
  • now I can see why this is so important. The temperature record informs the conclusions of the IPCC assessment reports and provides crucial evidence for global warming.
  • Further down we get: NIWA announces that it has now completed a full internal examination of the Salinger adjustments in the 7SS, and has forwarded its “review papers” to its Australian counterpart, the Bureau of Meteorology (BOM) for peer review. And: So the old 7SS has already been repudiated. A replacement NZTR [New Zealand Temperature Record] is being prepared by NIWA – presumably the best effort they are capable of producing. NZCSC is about to receive what it asked for. On the face of it, there’s nothing much left for the Court to adjudicate.
  • NIWA has been forced to withdraw its earlier temperature record and replace it with a new one. Treadgold quite clearly states that "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century" and that "the new temperature record shows no evidence of a connection with global warming." Earlier in the article he also stresses the role of the CSC in achieving these revisions, saying "after 12 months of futile attempts to persuade the public, misleading answers to questions in the Parliament from ACT and reluctant but gradual capitulation from NIWA, their relentless defence of the old temperature series has simply evaporated. They’ve finally given in, but without our efforts the faulty graph would still be there."
  • All this leads me to believe that if I look at the website of NIWA I will see a retraction of the earlier position and a new position that New Zealand has experienced no unusual warming. This is easy enough to check. I go there. Actually, I search for it to find the exact page. Here is the 7SS page on the NIWA site. Am I surprised that NIWA have retracted nothing and that in fact their revised graph shows similar results? Not really. However, I am somewhat surprised by this page on the Climate Conversation Group website which claims that the 7SS temperature record is as dead as the parrot in the Monty Python sketch. It says "On the eve of Christmas, when nobody was looking, NIWA declared that New Zealand had a new official temperature record (the NZT7) and whipped the 7SS off its website." However, I've already seen that this is not true. Perhaps there was once a 7SS graph and information about the temperature record on the site's homepage that can no longer be seen. I don't know. I can only speculate. I know that there is a section on the NIWA site about the 7SS temperature record that contains a number of graphs and figures and discusses recent revisions. It has been updated as recently as December 2010, last month. The NIWA page talks all about the 7SS series and has a heading that reads "Our new analysis confirms the warming trend".
  • The CCG page claims that the new NZT7 is not in fact a revision but rather a replacement. Although it results in a similar curve, the adjustments that were made are very different. Frankly I can't see how that matters at the end of the day. Now, I don't really know whether I can believe that the NIWA analysis is true, but what I am in no doubt of whatsoever is that the statements made by Richard Treadgold that were quoted in so many places are at best misleading. The NIWA has not changed its position in the slightest. The assertion that the NIWA have admitted that New Zealand has not warmed much since 1960 is a politician's careful argument. Both analyses showed the same result. This is a fact that NIWA have not disputed; however, they still maintain a connection to global warming. A document explaining the revisions talks about why the warming has slowed after 1960: The unusually steep warming in the 1940-1960 period is paralleled by an unusually large increase in northerly flow* during this same period. On a longer timeframe, there has been a trend towards less northerly flow (more southerly) since about 1960. However, New Zealand temperatures have continued to increase over this time, albeit at a reduced rate compared with earlier in the 20th century. This is consistent with a warming of the whole region of the southwest Pacific within which New Zealand is situated.
  • Denialists have taken Treadgold's misleading mantra and spread it far and wide including on Twitter and fringe websites, but it is faulty as I've just demonstrated. Why do people do this? Perhaps they are hoping that others won't check the sources. Most people don't. I hope this serves as a lesson for why you always should.
Weiye Loh

Do Americans trust the news media? (OneNewsNow.com) - 1 views

  • newly released poll by Sacred Heart University. The SHU Polling Institute recently conducted its third survey on "Trust and Satisfaction with the National News Media." It's a national poll intended to answer the question of whether Americans trust the news media. In a nutshell, the answer is a resounding "No!"
  • Pollsters asked which television news organizations people turned to most frequently. CBS News didn't even make the top five! Who did? Fox News was first by a wide margin of 28.4 percent. CNN was second, chosen by 14.9 percent. NBC News, ABC News, and "local news" followed, while CBS News lagged way behind with only 7.4 percent.
  • On the question of media bias, a whopping 83.6 percent agree that national news media organizations are "very or somewhat biased."
  • Which media outlet is most trusted to be accurate today? Again, Fox News took first place with a healthy margin of 30 percent. CNN followed with 19.5 percent, NBC News with 7.5 percent, and ABC News with 7.5 percent.
  • "... we see a strong degree of polarization and political partisanship in the country in general, we see a similar trend in the media." That probably explains why Fox News is also considered the least trusted, according to the SHU poll. Viewers seem to either love or hate Fox News.
    • Weiye Loh
       
      So is Fox News the most trusted or the least trusted according to the SHU poll? Or both? And if it's both, how exactly is the survey carried out? Aren't survey options supposed to be mutually exclusive and exhaustive? Maybe SHU has no course on research methods.
  • only 24.3 percent of the SHU respondents say they believe "all or most news media reporting." They also overwhelmingly (86.6 percent) believe "that the news media have their own political and public policy positions and attempt to influence public opinion."
    • Weiye Loh
       
      They believe that the media attempts to influence. But they also believe that the media is biased. Logically then, they don't trust and believe the media. Does that mean that the media has no influence? If so, why are they worried then? Third-person perception? Or do they simply believe that they are holier-than-thou? Are they really more objective? What is objectivity anyway, if not a social construct?
  •  
    One biased news source reporting on the bias of other news sources. Shows that (self-)reflexivity is key in reading.
Weiye Loh

Do avatars have digital rights? - 20 views

hi weiye, i agree with you that this brings in the topic of representation. maybe you should try taking media and representation by Dr. Ingrid to discuss more on this. Going back to your questio...

avatars

Weiye Loh

Skepticblog » Why are textbooks so expensive? - 0 views

  • As an author, I’ve seen how the sales histories of textbooks work. Typically they have a big spike of sales for the first 1-2 years after they are introduced, and that’s when most of the new copies are sold and most of the publisher’s money is made. But by year 3 (and sometimes sooner), the sales plunge and within another year or two, the sales are minuscule. The publishers have only a few options in a situation like this. One option: they can price the book so that the first two years’ worth of sales will pay their costs back before the used copies wipe out their market, which is the major reason new copies cost so much. Another option (especially with high-volume introductory textbooks) is to revise it within 2-3 years after the previous edition, so the new edition will drive all the used copies off the shelves for another two years or so. This is also a common strategy. For my most popular books, the publisher expected me to be working on a new edition almost as soon as the previous edition came out, and 2-3 years later, the new edition (with a distinctive new cover, and sometimes with significant new content as well) starts the sales curve cycle all over again. One of my books is in its eighth edition, but there are introductory textbooks that are in the 15th or 20th edition.
  • For over 20 years now, I’ve heard all sorts of prophets saying that paper textbooks are dead, and predicting that all textbooks would be electronic within a few years. Year after year, I hear this prediction—and paper textbooks continue to sell just fine, thank you. Certainly, electronic editions of mass market best-sellers, novels and mysteries (usually cheaply produced with few illustrations) seem to do fine as Kindle editions or eBooks, and that market is well established. But electronic textbooks have never taken off, at least in science textbooks, despite numerous attempts to make them work. Watching students study, I have a few thoughts as to why this is: Students seem to feel that they haven’t “studied” unless they’ve covered their textbook with yellow highlighter markings. Although there are electronic equivalents of the highlighter marker pen, most of today’s students seem to prefer physically marking on a real paper book. Textbooks (especially science books) are heavy with color photographs and other images that don’t often look good on a tiny screen, don’t print out on ordinary paper well, but raise the price of the book. Even an eBook is going to be a lot more expensive with lots of images compared to a mass-market book with no art whatsoever. I’ve watched my students study, and they like the flexibility of being able to use their book just about anywhere—in bright light outdoors away from a power supply especially. Although eBooks are getting better, most still have screens that are hard to read in bright light, and eventually their battery will run out, whether you’re near a power supply or not. Finally, if you drop your eBook or get it wet, you have a disaster. A textbook won’t even be dented by hard usage, and unless it’s totally soaked and cannot be dried, it does a lot better when wet than any electronic book.
  • A recent study found that digital textbooks were no panacea after all. Only one-third of the students said they were comfortable reading e-textbooks, and three-fourths preferred a paper textbook to an e-textbook if the costs were equal. And the costs have hidden jokers in the deck: e-textbooks may seem cheaper, but they tend to have built-in expiration dates and cannot be resold, so they may be priced below paper textbooks but end up costing about the same. E-textbooks are not that much cheaper for publishers, either, since the writing, editing, art manuscript, promotion, etc., all cost the publisher the same whether the final book is in paper or electronic. The only cost difference is printing and binding and shipping and storage vs. creating the electronic version.
  •  
    But in the 1980s and 1990s, the market changed drastically with the expansion of used book recyclers. They set up shop at the bookstore door near the end of the semester and bought students' new copies for pennies on the dollar. They would show up in my office uninvited and ask if I want to sell any of the free adopter's copies that I get from publishers trying to entice me. If you walk through any campus bookstore, nearly all the new copies have been replaced by used copies, usually very tattered and with broken spines. The students naturally gravitate to the cheaper used books (and some prefer them because they like it if a previous owner has highlighted the important stuff). In many bookstores, there are no new copies at all, or just a few that go unsold. What these bargain hunters don't realize is that every used copy purchased means a new copy unsold. Used copies pay nothing to the publisher (or the author, either), so to recoup their costs, publishers must price their new copies to offset the loss of sales to used copies. And so the vicious circle begins: the publisher raises the price on the book again, more students buy used copies, and the price of a new copy keeps climbing.
Weiye Loh

Churnalism or news? How PRs have taken over the media | Media | The Guardian - 0 views

  • The website, churnalism.com, created by charity the Media Standards Trust, allows readers to paste press releases into a "churn engine". It then compares the text with a constantly updated database of more than 3m articles. The results, which give articles a "churn rating", show the percentage of any given article that has been reproduced from publicity material. The Guardian was given exclusive access to churnalism.com prior to launch. It revealed how all media organisations are at times simply republishing, verbatim, material sent to them by marketing companies and campaign groups.
  • Meanwhile, an independent film-maker, Chris Atkins, has revealed how he duped the BBC into running an entirely fictitious story about Downing Street's new cat to coincide with the site's launch.

    The director created a Facebook page in the name of a fictitious character, "Tim Sutcliffe", who claimed the cat – which came from Battersea Cats Home – had belonged to his aunt Margaret. The story appeared in the Daily Mail and Metro, before receiving a prominent slot on BBC Radio 5 Live.

    BBC Radio 5 Live's Gaby Logan talks about a fictitious cat story (audio).

    Atkins, who was not involved in creating churnalism.com, uses spoof stories to highlight the failure of journalists to corroborate stories. He was behind an infamous prank last year that led to the BBC running a news package on a hoax YouTube video purporting to show urban foxhunters.

  • The creation of churnalism.com is likely to unnerve overworked journalists and the press officers who feed them. "People don't realise how much churn they're being fed every day," said Martin Moore, director of the trust, which seeks to improve standards in news. "Hopefully this will be an eye-opener."
  • Interestingly, all media outlets appear particularly susceptible to PR material disseminated by supermarkets: the Mail appears to have a particular appetite for publicity from Asda and Tesco, while the Guardian favours Waitrose releases.
  • Moore said one unexpected discovery has been that the BBC news website appears particularly prone to churning publicity material. "Part of the reason is presumably because they feel a duty to put out so many government pronouncements," Moore said. "But the BBC also has a lot to produce in regions that the newspapers don't cover."
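The article describes the churn engine only at a high level, and the Media Standards Trust's actual matching algorithm is not given here. As a rough illustrative sketch only, a "churn rating" can be approximated by counting how many of an article's short character fragments also occur in a press release; the fragment length `k=15` and the sample texts below are arbitrary assumptions, not the site's real method:

```python
def shingles(text, k=15):
    """All overlapping k-character fragments of the text (k is an arbitrary choice)."""
    text = " ".join(text.lower().split())  # normalise case and whitespace
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def churn_rating(article, press_release, k=15):
    """Percentage of the article's fragments that also occur in the press
    release -- a crude stand-in for churnalism.com's verbatim-reuse score."""
    art = shingles(article, k)
    return 100.0 * len(art & shingles(press_release, k)) / len(art)

# Hypothetical texts for illustration:
release = "Acme Corp today announced record profits for the third quarter."
copied = "Acme Corp today announced record profits for the third quarter, the firm said."
fresh = "Shares in several retailers rose sharply after strong quarterly results."

print(churn_rating(copied, release))  # high: mostly reproduced from the release
print(churn_rating(fresh, release))   # low: independently worded
```

A production system would need fuzzier matching and an index over millions of stored articles, but the principle of scoring verbatim overlap is the same.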
Weiye Loh

Jonathan Stray » Measuring and improving accuracy in journalism - 0 views

  • Accuracy is a hard thing to measure because it’s a hard thing to define. There are subjective and objective errors, and no standard way of determining whether a reported fact is true or false.
  • The last big study of mainstream reporting accuracy found errors (defined below) in 59% of 4,800 stories across 14 metro newspapers. This level of inaccuracy — where about one in every two articles contains an error — has persisted for as long as news accuracy has been studied, over seven decades now.
  • With the explosion of available information, more than ever it’s time to get serious about accuracy, about knowing which sources can be trusted. Fortunately, there are emerging techniques that might help us to measure media accuracy cheaply, and then increase it.
  • We could continuously sample a news source’s output to produce ongoing accuracy estimates, and build social software to help the audience report and filter errors. Meticulously applied, this approach would give a measure of the accuracy of each information source, and a measure of the efficiency of their corrections process (currently only about 3% of all errors are corrected).
  • Real world reporting isn’t always clearly “right” or “wrong,” so it will often be hard to decide whether something is an error or not. But we’re not going for ultimate Truth here, just a general way of measuring some important aspect of the idea we call “accuracy.” In practice it’s important that the error counting method is simple, clear and repeatable, so that you can compare error rates of different times and sources.
  • Subjective errors, though by definition involving judgment, should not be dismissed as merely differences in opinion. Sources found such errors to be about as common as factual errors and often more egregious [as rated by the sources]. But subjective errors are a very complex category.
  • One of the major problems with previous news accuracy metrics is the effort and time required to produce them. In short, existing accuracy measurement methods are expensive and slow. I’ve been wondering if we can do better, and a simple idea comes to mind: sampling. The core idea is this: news sources could take an ongoing random sample of their output and check it for accuracy — a fact check spot check.
  • Standard statistical theory tells us what the error on that estimate will be for any given number of samples (if I’ve got this right, the relevant formula is the standard error of a population proportion estimate without replacement). At a sample rate of a few stories per day, daily estimates of error rate won’t be worth much. But weekly and monthly aggregates will start to produce useful accuracy estimates.
  • the first step would be admitting how inaccurate journalism has historically been. Then we have to come up with standardized accuracy evaluation procedures, in pursuit of metrics that capture enough of what we mean by “true” to be worth optimizing. Meanwhile, we can ramp up the efficiency of our online corrections processes until we find as many useful, legitimate errors as possible with as little staff time as possible. It might also be possible to do data mining on types of errors and types of stories to figure out if there are patterns in how an organization fails to get facts right.
  • I’d love to live in a world where I could compare the accuracy of information sources, where errors got found and fixed with crowd-sourced ease, and where news organizations weren’t shy about telling me what they did and did not know. Basic factual accuracy is far from the only measure of good journalism, but perhaps it’s an improvement over the current sad state of affairs.
  •  
    Professional journalism is supposed to be "factual," "accurate," or just plain true. Is it? Has news accuracy been getting better or worse in the last decade? How does it vary between news organizations, and how do other information sources rate? Is professional journalism more or less accurate than everything else on the internet? These all seem like important questions, so I've been poking around, trying to figure out what we know and don't know about the accuracy of our news sources. Meanwhile, the online news corrections process continues to evolve, which gives us hope that the news will become more accurate in the future.
Weiye Loh
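The fact-check spot check described in the highlights above is easy to make concrete. Below is a minimal sketch (the function name and the 12-errors-in-90-stories numbers are hypothetical) of estimating an error rate from a random sample, using the standard error of a population proportion with the finite population correction, since stories are checked without replacement:

```python
import math

def error_rate_estimate(sample_errors, sample_size, population_size):
    """Estimate a source's error rate from a random spot-check sample.

    Returns (estimated error rate, standard error). The standard error
    uses the finite population correction because stories are sampled
    without replacement from a fixed pool of published stories.
    """
    p = sample_errors / sample_size
    # Finite population correction for sampling without replacement.
    fpc = (population_size - sample_size) / (population_size - 1)
    se = math.sqrt(p * (1 - p) / sample_size * fpc)
    return p, se

# Hypothetical monthly aggregate: 12 errors found in a random sample of
# 90 stories, drawn from 600 stories published that month.
p, se = error_rate_estimate(12, 90, 600)
print(f"error rate ~ {p:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

As the excerpt predicts, at a few sampled stories per day the daily interval is too wide to be useful, but weekly and monthly aggregates shrink the standard error roughly with the square root of the sample size.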

Are partisan news sources polarizing Americans? | Ars Technica - 0 views

  •  
    "By separating people into "news seekers" (those who said they'd prefer to watch the news programs) and "entertainment seekers," an interesting pattern is revealed. Entertainment seekers who were assigned to watch one of the partisan programs (much to their disappointment) were actually much more influenced by them than news seekers watching the same shows. News seekers are presumably more aware of current political debates and may have already formed their own opinions, making new information less likely to change their thinking."
Weiye Loh

Will you ever trust a pixel again? - The Economist - 0 views

  •  
    "First, false news is tricking people. During the presidential election last year, Facebook users saw articles falsely claiming that the pope had endorsed Donald Trump and that Hillary Clinton was at the centre of a paedophile ring. Many readers were inclined to believe these false stories because they didn't come from the traditional news sources, trust in which is pretty low. This is due in part to the second thing: America's president and those around him are denouncing news stories that portray them negatively, eroding trust in news and the organisations that report it. The same is true of Russia's president and others. So we have to deal with both actually false news and powerful individuals and institutions denouncing genuine news as false."
Weiye Loh

Does patent/ copyright stifle or promote innovation? - 6 views

From a Critical Ethics perspective, whom do patents and copyrights protect? What kind of ideologies underlie such a policy? I would argue that it is the capitalist ideologies, individualist ideolo...

MS Word patent copyright

Weiye Loh

Why do we care where we publish? - 0 views

  • being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • ...17 more annotations...
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in the situation that the editors of major journals are in the position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • as the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • Articles from the big three are much easier to cover on Nobel Intent than articles from, say, Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with less than ten citations can look good for quite a few years, even compared to an Optics Letters with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using a mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive influence of a publication mattering less than who thought it was important.
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
Weiye Loh
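The graph-analysis alternative proposed in the last highlights is usually operationalized as an eigenvector-style centrality on the citation network. A minimal sketch (the function name and the five-paper toy graph are hypothetical; real analyses run on far larger networks) of a PageRank-like iteration, in which a citation from a central paper counts for more than one from the periphery:

```python
def citation_centrality(graph, damping=0.85, iterations=100):
    """PageRank-style centrality on a citation graph.

    `graph` maps each paper to the list of papers it cites. A paper is
    central if it is cited by papers that are themselves central, so
    raw citation counts matter less than who is doing the citing.
    """
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1 - damping) / n for node in nodes}
        for node, cited in graph.items():
            if cited:
                # Split this paper's weight among the papers it cites.
                share = damping * rank[node] / len(cited)
                for target in cited:
                    new_rank[target] += share
            else:
                # A paper that cites nothing spreads its weight evenly.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Toy citation network: each edge points at the cited paper.
papers = {"A": ["C"], "B": ["C"], "C": ["D"], "D": [], "E": ["C", "D"]}
rank = citation_centrality(papers)
print("most central:", max(rank, key=rank.get))
```

On this toy graph the heavily cited papers C and D come out well ahead of the peripheral A, B, and E, which is exactly the property the author wants: where a publication appeared mattering less than who thought it was important.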

After Egypt, now with tsunami news, CNA again a disgrace « Yawning Bread on W... - 0 views

  • Flicking from one channel to another, I often had to go past Channel NewsAsia (CNA). On two occasions, I stopped for a while to see for myself how they were reporting the Egyptian uprising compared to the others. It was pathetic. Their reports were not timely, nor did they have depth. Where Al Jazeera and the BBC had leading figures like Mohamed El Baradei and Amr Moussa on camera, together with regular on-scene interviews or phone interviews with the protestors themselves, and even CNN had the Facebook organiser Wael Ghonim, all CNA had was an unknown lecturer in Middle Eastern Studies from some institute or other in Singapore giving a thoroughly theoretical take, not on unfolding events, but on the background. And in a stiff studio setting.
  • This weekend, the bad news is the Richter 8.9 earthquake off the coast of Miyagi prefecture of Japan that produced a tsunami that was 10 metres high in places.
  • When I was at my father’s place, I wanted an update. All we had was CNA and so I turned to it for the eleven o’clock news. They had a reporter reporting from Tokyo about how transport systems in the capital city were paralysed last night and people walked for hours to get home. This topic was already covered on last night’s news; it is being covered again tonight. No other news agency with any self-respect is making “walking home” such a big news story (or any news story at all) when people are dying. CNA then followed that up with reports from Changi airport about flights cancelled and how passengers were inconvenienced. Thirdly, they had an earth scientist on air to explain what causes tsunamis. To soak up the time, he then had to field about four questions from the host repeatedly asking him whether tsunamis could be predicted — as if this was the burning issue at the moment.
  • ...1 more annotation...
  • In the entire news bulletin, almost nothing was mentioned about the areas where the earthquake was most severe and the tsunami most devastating (i.e. the Sendai area). There was hardly any footage, no on-the-spot reporting, no casualty figures, nothing about how victims are putting up. OK, to be fair there were a few seconds showing people queuing up to get food and drinking water at one shop. Not a word about 10,000 people missing from Minamisanriku. Not even about rescue teams struggling to get to the worst areas. Amazingly, not a word too was said about the nuclear plants with overheating cores, or the hurried evacuations (that I learnt about online), at first 3 km radius, then 10 km, and now 20 km . . . suggesting that the situation is probably out of control and may be becoming critical. To CNA, it is apparently not news. What was news was how horrid it was that middle-class Singaporeans were stuck at the airport unable to go on holiday.
Weiye Loh

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • ...17 more annotations...
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation. In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
Weiye Loh

EdgeRank: The Secret Sauce That Makes Facebook's News Feed Tick - 0 views

  • but News Feed only displays a subset of the stories generated by your friends — if it displayed everything, there’s a good chance you’d be overwhelmed. Developers are always trying to make sure their sites and apps are publishing stories that make the cut, which has led to the concept of “News Feed Optimization”, and their success is dictated by EdgeRank.
  • At a high level, the EdgeRank formula is fairly straightforward. But first, some definitions: every item that shows up in your News Feed is considered an Object. If you have an Object in the News Feed (say, a status update), whenever another user interacts with that Object they’re creating what Facebook calls an Edge, which includes actions like tags and comments. Each Edge has three components important to Facebook’s algorithm: First, there’s an affinity score between the viewing user and the item’s creator — if you send your friend a lot of Facebook messages and check their profile often, then you’ll have a higher affinity score for that user than you would, say, an old acquaintance you haven’t spoken to in years. Second, there’s a weight given to each type of Edge. A comment probably has more importance than a Like, for example. And finally there’s the most obvious factor — time. The older an Edge is, the less important it becomes.
  • Multiply these factors for each Edge then add the Edge scores up and you have an Object’s EdgeRank. And the higher that is, the more likely your Object is to appear in the user’s feed. It’s worth pointing out that the act of creating an Object is also considered an Edge, which is what allows Objects to show up in your friends’ feeds before anyone has interacted with them.
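The formula described above can be turned into a small sketch. The edge weights, affinity values, and decay curve below are illustrative assumptions — Facebook has never published the actual constants:

```python
import time

# Hypothetical weights per Edge type; the real values are not public.
EDGE_WEIGHTS = {"create": 1.0, "like": 1.0, "comment": 4.0, "tag": 3.0}

def time_decay(age_seconds, half_life=6 * 3600):
    """Older Edges count for less; the exact curve is an assumption."""
    return 0.5 ** (age_seconds / half_life)

def edgerank(edges, now=None):
    """Sum affinity * weight * decay over every Edge on an Object."""
    now = time.time() if now is None else now
    return sum(
        affinity * EDGE_WEIGHTS[action] * time_decay(now - created_at)
        for affinity, action, created_at in edges
    )

now = time.time()
edges = [
    (0.9, "create", now - 3600),   # close friend posted an hour ago
    (0.4, "comment", now - 1800),  # acquaintance commented 30 min ago
    (0.1, "like", now - 600),      # distant contact liked 10 min ago
]
print(f"EdgeRank: {edgerank(edges, now):.2f}")
```

Note that the act of creating the Object is itself an Edge, which is why a brand-new post with no interactions still has a nonzero score.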
  • ...3 more annotations...
  • an Object is more likely to show up in your News Feed if people you know have been interacting with it recently. That really isn’t particularly surprising. Neither is the resulting message to developers: if you want your posts to show up in News Feed, make sure people will actually want to interact with them.
  • Steinberg hinted that a simpler version of News Feed may be on the way, as the current two-tabbed system is a bit complicated. That said, many people still use both tabs, with over 50% of users clicking over to the ‘most recent’ tab on a regular basis.
  • If you want to watch the video for yourself, click here, navigate to the Techniques sessions, and click on ‘Focus on Feed’. The talk about Facebook’s algorithms begins around 22 minutes in.
Weiye Loh

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai... - 0 views

  • Internet-based news websites and the growing popularity of social media have broken the mainstream media's monopoly on news - though not completely. Singapore's PAP-led government was one of the first in the world to devise content regulations for the Internet, issuing restrictions on topics it deemed as sensitive as early as 1996.
  • While political parties are broadly allowed to use the Internet to campaign, they were previously prohibited from employing some of the medium's most powerful features, including live audio and video streaming and so-called "viral marketing". Websites not belonging to political parties or candidates but registered as political sites have been banned from activities that could be considered online electioneering.
  • George argued that despite the growing influence of online media, it would be naive to conclude that the PAP's days of domination are numbered. "While the government appears increasingly liberal towards individual self-expression, it continues to intervene strategically at points at which such expression may become politically threatening," he said. "It is safe to assume that the government's digital surveillance capabilities far outstrip even its most technologically competent opponent's evasive abilities."
  • ...2 more annotations...
  • consistent with George’s analysis, authorities last week relaxed past regulations that limited the use of the Internet and social media for election campaigning. Political parties and candidates will be allowed to use a broader range of new media platforms, including blogs, micro-blogs, online photo-sharing platforms, social networking sites and electronic media applications used on mobile phones, for election advertising. The loosening, however, only applies to political party-run websites, chat rooms and online discussion forums. Candidates must declare the new media content they intend to use within 12 hours after the start of the election campaign period. George warned in a recent blog entry that the new declaration requirements could open the way for PAP-led defamation suits against opposition politicians who use new media. PAP leaders have historically relied on expensive litigation to suppress opposition and media criticism. “The PAP won’t subject everyone’s postings to legal scrutiny. But if it decides that a particular opposition politician needs to be utterly demolished, you can bet that no tweet of his would be too tiny, no Facebook update too fleeting ... in order to build the case against the individual,” George warned in a journalism blog.
  • While opposition politicians will rely more on new than mainstream media to communicate with voters, they already recognize that the use of social media will not necessarily translate into votes. “[Online support] can give too rosy a picture and a false degree of comfort,” said the RP’s Jeyaretnam. “People who [interact with] us online are those who are already convinced with our messages anyway.”
Weiye Loh

Have you heard of the Koch Brothers? | the kent ridge common - 0 views

  • I return to the Guardian online site expressly to search for those elusive articles on Wisconsin. The main page has none. I click on News – US, and there are none. I click on ‘Comment is free’ – US, and find one article on protests in Ohio. I go to the New York Times online site. Earlier, on my phone, I had seen one article at the bottom of the main page on Wisconsin. By the time I managed to get on my computer to find it again, however, the NYT main page was quite devoid of any articles on the protests at all. I am stumped; clearly, I have to reconfigure my daily news sources and reading diet.
  • It is not that the media is not covering the protests in Wisconsin at all – but effective media coverage in the US at least, in my view, is as much about volume as it is about substantive coverage. That week, more prime-time slots and the bulk of the US national attention were given to Charlie Sheen and his crazy antics (whatever they were about, I am still not too sure) than to Libya and the rest of the Middle East, or more significantly, to a pertinent domestic issue, the teacher protests – not just in Wisconsin but also in other cities in the north-eastern part of the US.
  • In the March 2nd episode of The Colbert Report, it was shown that the Fox News coverage of the Wisconsin protests had re-used footage from more violent protests in California (the palm trees in the background gave Fox News away). Bill O’Reilly at Fox News had apparently issued an apology – but how many viewers who had seen the footage and believed it to be on-the-ground footage of Wisconsin would have followed-up on the report and the apology? And anyway, why portray the teacher protests as violent?
  • ...12 more annotations...
  • In this New York Times article, “Teachers Wonder, Why the Scorn?”, the writer notes the often scathing comments from counter-demonstrators: “Oh you pathetic teachers, read the online comments and placards of counterdemonstrators. You are glorified baby sitters who leave work at 3 p.m. You deserve minimum wage.” What had begun as an ostensibly ‘economic reform’ targeted at teachers’ unions has gradually transmogrified into a kind of “character attack” on this section of American society – teachers are people who wage violent protests (thanks to borrowed footage from the West Coast) and they are undeserving of their economic benefits, and indeed treat these privileges as ‘rights’. The ‘war’ is waged on multiple fronts: economic, political, social, even psychological – or at least one gets this sort of picture from reading these articles.
  • as Singaporeans with a uniquely Singaporean work ethic, we may perceive functioning ‘trade unions’ as those institutions in the so-called “West” where they amass lots of membership, then hold the government ‘hostage’ in order to negotiate higher wages and benefits. Think of trade unions in the Singaporean context, and I think of SIA pilots. And of LKY’s various firm and stern comments on those issues. Think of trade unions and I think of strikes in France, in South Korea, when I was younger, and of my mum saying, “How irresponsible!” before flipping the TV channel.
  • The reason why I think the teachers’ protests should not be seen solely as an issue about trade-unions, and evaluated myopically and naively in terms of whether trade unions are ‘good’ or ‘bad’ is because the protests feature in a larger political context with the billionaire Koch brothers at the helm, financing and directing much of what has transpired in recent weeks. Or at least according to certain articles which I present here.
  • In this NYT article, entitled “Billionaire Brothers’ Money Plays Role in Wisconsin Dispute”, the writer noted that Koch Industries had been “one of the biggest contributors to the election campaign of Gov. Scott Walker of Wisconsin, a Republican who has championed the proposed cuts.” Further, the president of Americans for Prosperity, a nonprofit group financed by the Koch brothers, had reportedly addressed counter-demonstrators last Saturday, saying that “the cuts were not only necessary, but they also represented the start of a much-needed nationwide move to slash public-sector union benefits” and, in his own words, “We are going to bring fiscal sanity back to this great nation”. All this rhetoric would be more convincing to me if they weren’t funded by the same two billionaires who financially enabled Walker’s governorship.
  • I now refer you to a long piece by Jane Mayer for The New Yorker titled, “Covert Operations: The billionaire brothers who are waging a war against Obama“. According to her, “The Kochs are longtime libertarians who believe in drastically lower personal and corporate taxes, minimal social services for the needy, and much less oversight of industry—especially environmental regulation. These views dovetail with the brothers’ corporate interests.”
  • Their libertarian modus operandi involves great expenses in lobbying, in political contributions and in setting up think tanks. From 2006 to 2010, Koch Industries led energy companies in political contributions; “[i]n the second quarter of 2010, David Koch was the biggest individual contributor to the Republican Governors Association, with a million-dollar donation.” More statistics, or at least those of the non-anonymous donation records, can be found on page 5 of Mayer’s piece.
  • Naturally, the Democrats also have their billionaire donors, most notably in the form of George Soros. Mayer writes that he has made “generous private contributions to various Democratic campaigns, including Obama’s.” Yet what distinguishes him from the Koch brothers here is, as Michael Vachon, his spokesman, argued, that Soros’s giving is transparent, and that “none of his contributions are in the service of his own economic interests.” Of course, this must be taken with a healthy dose of salt, but I will note here that in Charles Ferguson’s documentary Inside Job, which was about the 2008 financial crisis, George Soros was one of those interviewed who was not portrayed negatively. (My review of it is here.)
  • Of the Koch brothers’ political investments, what interested me more was the US’ “first libertarian thinktank”, the Cato Institute. Mayer writes, ‘When President Obama, in a 2008 speech, described the science on global warming as “beyond dispute,” the Cato Institute took out a full-page ad in the Times to contradict him. Cato’s resident scholars have relentlessly criticized political attempts to stop global warming as expensive, ineffective, and unnecessary. Ed Crane, the Cato Institute’s founder and president, told [Mayer] that “global-warming theories give the government more control of the economy.” ’
  • K Street refers to a major street in Washington, D.C. where major think tanks, lobbyists and advocacy groups are located.
  • with recent developments as the Citizens United case where corporations are now ‘persons’ and have no caps in political contributions, the Koch brothers are ever better-positioned to take down their perceived big, bad government and carry out their ideological agenda as sketched in Mayer’s piece
  • with much important news around the world jostling for our attention – earthquake in Japan, Middle East revolutions – the passing of an anti-union bill (which finally happened today, for better or for worse) in an American state is unlikely to make a headline able to compete with natural disasters and revolutions. Then, to quote Wisconsin Governor Scott Walker during that prank call conversation, “Sooner or later the media stops finding it [the teacher protests] interesting.”
  • What remains more puzzling for me is why the American public seems to buy into the Koch-funded libertarian rhetoric. Mayer writes: “Income inequality in America is greater than it has been since the nineteen-twenties, and since the seventies the tax rates of the wealthiest have fallen more than those of the middle class. Yet the brothers’ message has evidently resonated with voters: a recent poll found that fifty-five per cent of Americans agreed that Obama is a socialist.” I suppose that not knowing who is funding the political rhetoric makes it easier for the public to imbibe it.
Weiye Loh

True Enough : CJR - 0 views

  • The dangers are clear. As PR becomes ascendant, private and government interests become more able to generate, filter, distort, and dominate the public debate, and to do so without the public knowing it. “What we are seeing now is the demise of journalism at the same time we have an increasing level of public relations and propaganda,” McChesney said. “We are entering a zone that has never been seen before in this country.”
  • Michael Schudson, a journalism professor at Columbia University, cjr contributor, and author of Discovering the News, said modern public relations started when Ivy Lee, a minister’s son and a former reporter at the New York World, tipped reporters to an accident on the Pennsylvania Railroad. Before then, railroads had done everything they could to cover up accidents. But Lee figured that crashes, which tend to leave visible wreckage, were hard to hide. So it was better to get out in front of the inevitable story. The press release was born. Schudson said the rise of the “publicity agent” created deep concern among the nation’s leaders, who distrusted a middleman inserting itself and shaping messages between government and the public. Congress was so concerned that it attached amendments to bills in 1908 and 1913 that said no money could be appropriated for preparing newspaper articles or hiring publicity agents.
  • But World War I pushed those concerns to the side. The government needed to rally the public behind a deeply unpopular war. Suddenly, publicity agents did not seem so bad.
  • ...7 more annotations...
  • “After the war, PR becomes a very big deal,” Schudson said. “It was partly stimulated by the war and the idea of journalists and others being employed by the government as propagandists.” Many who worked for the massive wartime propaganda apparatus found an easy transition into civilian life.
  • People “became more conscious that they were not getting direct access, that it was being screened for them by somebody else,” Schudson said. But there was no turning back. PR had become a fixture of public life. Concern about the invisible filter of public relations became a steady drumbeat in the press
  • When public relations began its ascent in the early twentieth century, journalism was rising alongside it. The period saw the ferocious work of the muckrakers, the development of the great newspaper chains, and the dawn of radio and, later, television. Journalism of the day was not perfect; sometimes it was not even good. But it was an era of expansion that eventually led to the powerful press of the mid to late century.
  • Now, during a second rise of public relations, we are in an era of massive contraction in traditional journalism. Bureaus have closed, thousands of reporters have been laid off, once-great newspapers like the Rocky Mountain News have died. The Pew Center took a look at the impact of these changes last year in a study of the Baltimore news market. The report, “How News Happens,” found that while new online outlets had increased the demand for news, the number of original stories spread out among those outlets had declined. In one example, Pew found that area newspapers wrote one-third the number of stories about state budget cuts as they did the last time the state made similar cuts in 1991. In 2009, Pew said, The Baltimore Sun produced 32 percent fewer stories than it did in 1999.
  • even original reporting often bore the fingerprints of government and private public relations. Mark Jurkowitz, associate director of the Pew Center, said the Baltimore report concentrated on six major story lines: state budget cuts, shootings of police officers, the University of Maryland’s efforts to develop a vaccine, the auction of the Senator Theater, the installation of listening devices on public buses, and developments in juvenile justice. It found that 63 percent of the news about those subjects was generated by the government, 23 percent came from interest groups or public relations, and 14 percent started with reporters.
  • The Internet makes it easy for public relations people to reach out directly to the audience and bypass the press, via websites and blogs, social media and videos on YouTube, and targeted e-mail.
  • Some experts have argued that in the digital age, new forms of reporting will eventually fill the void left by traditional newsrooms. But few would argue that such a point has arrived, or is close to arriving. “There is the overwhelming sense that the void that is created by the collapse of traditional journalism is not being filled by new media, but by public relations,” said John Nichols, a Nation correspondent and McChesney’s co-author. Nichols said reporters usually make some calls and check facts. But the ability of government or private public relations to generate stories grows as reporters have less time to seek out stories on their own. That gives outside groups more power to set the agenda.
  •  
    In their recent book, The Death and Life of American Journalism, Robert McChesney and John Nichols tracked the number of people working in journalism since 1980 and compared it to the numbers for public relations. Using data from the US Bureau of Labor Statistics, they found that the number of journalists has fallen drastically while public relations people have multiplied at an even faster rate. In 1980, there were about .45 PR workers per one hundred thousand population compared with .36 journalists. In 2008, there were .90 PR people per one hundred thousand compared to .25 journalists. That's a ratio of more than three to one, and the PR side is better equipped and better financed.
Weiye Loh

News of the World phone-hacking scandal - live updates | Media | guardian.co.uk - 0 views

  •  
    News of the World phone-hacking scandal - as it happened
    * Andy Coulson and Clive Goodman arrested
    * Guardian reveals it warned Cameron over Coulson
    * Both being held at separate south London police stations
    * Cameron announces two inquiries, one by judge
    * Ofcom expected to announce investigation of News Corp
    * News International exec may have deleted emails
Weiye Loh

journalism.sg » Personalised news the way to go? - 0 views

  • An American university professor predicted that the mass media would lose its status as the world’s primary information source
  • Instead, people will demand customised content to suit their individual needs, he said. “People will increasingly have the ability to choose news and information according to their individual interests,” he told 400 media professionals, lecturers and students at the Singapore Press Holdings’ Media in Transition Lecture Series.
  • Crosbie said that this new media world might develop as a result of today’s information overload. Tailoring news to each reader could help address the million-dollar question of how to continue making money from print media at a time when online news is flourishing and free, he said.
  •  
    PERSONALISED NEWS THE WAY TO GO? July 16th, 2010