
New Media Ethics 2009 course: Group items matching "Reality" in title, tags, annotations or url

Weiye Loh

Information about information | plus.maths.org - 0 views

  • since what we actually experience depends on us observing the world (via our measuring devices), reality is shaped by answers to yes/no questions. For example, is the electron here or is it not? Is its spin pointing up or pointing down? Answers to questions are information — the yes and no in English language correspond to the 0 and 1 in computer language. Thus, information is fundamental to physical reality. As the famous physicist John Archibald Wheeler put it, the "It" we observe around us comes from the "Bit" that encodes information: "It from bit". Is this really true?
  •  
    "what exactly is information? We tend to think of it as human made, but since we're all a result of our DNA sequence, perhaps we should think of humans as being made of information."
Weiye Loh

Rationally Speaking: Why do libertarians deny climate change? - 0 views

  • the trend is hard to miss. The libertarian think tank CATO Institute has been waging a media war against the very notion for years, and even prominent skeptics with libertarian leanings have pronounced themselves negatively on the matter (most famously Penn & Teller, and initially even Michael Shermer, though both — I count P&T as one — lately have taken a few steps back from their initial positions).
  • whether climate change is real or not. It is, according to the best science available. Yes, even the best science can be wrong, but frankly the only people who can tell with any degree of reasonability are those belonging to the relevant community of experts, in this case climate scientists
  • The question is particularly pertinent to libertarians and the ideologically close allied group of “objectivists,” i.e. followers of Ayn Rand (though there are significant differences between the two groups, as I mentioned before). These people often claim to be friends of science (as opposed to many radical conservatives like Senator James Inhofe (R-Okla), who called global warming the “greatest hoax ever perpetrated on the American people” (perpetrated by whom? And to what end?)), and in the case of objectivists, their whole approach to politics is allegedly based on rational consideration of the facts.
  • ...6 more annotations...
  • one would think that libertarians could make a distinction between evidence-based interpretation of reality (global warming is happening), and whatever policies we might want to enact to avoid catastrophe. Qua libertarians, they would obviously resist any government-led effort at clean up, especially if internationally coordinated, preferring instead a coalition of the willing within the private sector
  • there certainly is plenty of room for reasonable discussions and disagreements about how best to proceed in confronting the problem. On the other hand, there doesn’t seem to be much room for reasonable disagreement about the very existence of the problem itself. So, what gives, my dear libertarians?
  • In the case of major libertarian outlets, like the CATO Institute think tank, the rather unglamorous answer may simply be that they are in the pockets of the oil industry. A large amount of the funding for CATO comes from private corporations with obvious political agendas including, you guessed it, Exxon-Mobil (remember the Valdez?). No wonder CATO people trumpet the party line on this one.
  • The second reason, however, is more personal and widespread: libertarianism is committed to the high moral value of private enterprise
  • it follows naturally (if irrationally) that libertarians cannot admit to themselves, and even less to the world at large, that the much vaunted private sector may be responsible — out of both greed and downright incompetence — for a major environmental catastrophe of planetary proportions. The industry is the good guy in their movie, how then could they possibly have done something so horrible?
  • That’s the problem with ideology in general (be it left, right, or libertarian), it provides us with thick blinders that very effectively shield us from reality. Of course, no one is actually free of bias, yours truly included. But a core principle of skepticism and critical thinking is that we do our best to be aware of (and minimize) our own biases, and that we ought to open ourselves to honest criticism from different parties, in pursuit of the best approximation to the truth that we can muster.
  •  
    Why do libertarians deny climate change?
Weiye Loh

Online "Toon porn" - 20 views

I must correct that never in my arguments did I mention that the interpreter is the problem. I was merely answering YZ's question of whether cartoon characters can be deemed as representative of human be...

online cartoon anime pornography ethics

Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations? (See the two short sketches after these annotations.)
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
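The transposed-conditional error and the multiplicity problem described in the annotations above are easy to see in a small simulation. The following is a minimal Python sketch, not taken from the article; the base rate of true hypotheses, the sample size, and the effect size are illustrative assumptions. It shows that even when every individual test holds its false-alarm rate at 5 percent, a large share of the p < .05 results can still be flukes, because a P value is the probability of the data given no effect, not the probability that the effect is real.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_experiments = 10_000   # hypothetical independent studies
    base_rate = 0.10         # assume only 10% of tested hypotheses are actually true
    n_per_group = 30         # subjects per arm in each study
    true_effect = 0.5        # effect size (in SD units) when a real effect exists

    false_pos = true_pos = 0
    for _ in range(n_experiments):
        effect_is_real = rng.random() < base_rate
        shift = true_effect if effect_is_real else 0.0
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(shift, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            if effect_is_real:
                true_pos += 1
            else:
                false_pos += 1

    significant = true_pos + false_pos
    print(f"'significant' results: {significant}")
    print(f"fraction of those that are flukes: {false_pos / significant:.0%}")
    # With these assumptions, roughly half of the p < .05 findings are false
    # positives, even though each test 'controlled' its error rate at 5 percent.

    # The multiplicity problem is the same arithmetic applied within one study:
    print(f"chance of at least one fluke in 20 independent tests: {1 - 0.95**20:.0%}")

Under these assumptions the simulation mirrors the article's point: a single "significant" result, taken alone, is far less trustworthy than the 5 percent threshold suggests.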
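The prior-probability point attributed to Diamond and Kaul can be illustrated with the standard diagnostic-testing case mentioned in the annotations. This is a minimal sketch with made-up numbers (1 percent prevalence, 95 percent sensitivity and specificity), chosen only for illustration:

    def posterior_positive(prevalence, sensitivity, specificity):
        """P(disease | positive test), computed with Bayes' theorem."""
        p_pos_if_disease = sensitivity
        p_pos_if_healthy = 1.0 - specificity
        p_pos = p_pos_if_disease * prevalence + p_pos_if_healthy * (1.0 - prevalence)
        return p_pos_if_disease * prevalence / p_pos

    # Illustrative (assumed) numbers: an accurate test for a rare condition.
    print(posterior_positive(prevalence=0.01, sensitivity=0.95, specificity=0.95))
    # ~0.16: despite the positive result, the disease is still unlikely, because
    # the prior probability (the prevalence) is so low. A "significant" P value
    # considered without any prior is uninformative in much the same way.

This is the point the Bayesian critics quoted above are pressing: a hypothesis cannot be assessed from the observational data alone, only in light of how plausible it was to begin with.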
Weiye Loh

Valerie Plame, YES! Wikileaks, NO! - English pravda.ru - 0 views

  • In my recent article Ward Churchill: The Lie Lives On (Pravda.Ru, 11/29/2010), I discussed the following realities about America's legal "system": it is duplicitous and corrupt; it will go to any extremes to insulate from prosecution, and in many cases civil liability, persons whose crimes facilitate this duplicity and corruption; it has abdicated its responsibility to serve as a "check-and-balance" against the other two branches of government, and has instead been transformed into a weapon exploited by the wealthy, the corporations, and the politically connected to defend their criminality, conceal their corruption and promote their economic interests
  • it is now evident that Barack Obama, who entered the White House with optimistic messages of change and hope, is just as complicit in, and manipulative of, the legal "system's" duplicity and corruption as was his predecessor George W. Bush.
  • the Obama administration has refused to prosecute former Attorney General John Ashcroft for abusing the "material witness" statute; refused to prosecute Ashcroft's successor (and suspected perjurer) Alberto Gonzales for his role in the politically motivated firing of nine federal prosecutors; refused to prosecute Justice Department authors of the now infamous "torture memos," like John Yoo and Jay Bybee; and, more recently, refused to prosecute former CIA official Jose Rodriguez Jr. for destroying tapes that purportedly showed CIA agents torturing detainees.
  • ...11 more annotations...
  • thanks to Wikileaks, the world has been enlightened to the fact that the Obama administration not only refused to prosecute these individuals itself, it also exerted pressure on the governments of Germany and Spain not to prosecute, or even indict, any of the torturers or war criminals from the Bush dictatorship.
  • we see many right-wing commentators demanding that Assange be hunted down, with some even calling for his murder, on the grounds that he may have endangered lives by releasing confidential government documents. Yet, for the right-wing, this apparently was not a concern when the late columnist Robert Novak "outed" CIA agent Valerie Plame after her husband Joseph Wilson authored an OP-ED piece in The New York Times criticizing the motivations for waging war against Iraq. Even though there was evidence of involvement within the highest echelons of the Bush dictatorship, only one person, Lewis "Scooter" Libby, was indicted and convicted of "outing" Plame to Novak. And, despite the fact that this "outing" potentially endangered the lives of Plame's overseas contacts, Bush commuted Libby's thirty-month prison sentence, calling it "excessive."
  • Why the disparity? The answer is simple: The Plame "outing" served the interests of the military-industrial complex and helped to conceal the Bush dictatorship's lies, tortures and war crimes, while Wikileaks not only exposed such evils, but also revealed how Obama's administration, and Obama himself, are little more than "snake oil" merchants pontificating about government accountability while undermining it at every turn.
  • When the United States Constitution was being created, a conflict emerged between delegates who wanted a strong federal government (the Federalists) and those who wanted a weak federal government (the anti-Federalists). Although the Federalists won the day, one of the most distinguished anti-Federalists, George Mason, refused to sign the new Constitution, sacrificing in the process, some historians say, a revered place amongst America's founding fathers. Two of Mason's concerns were that the Constitution did not contain a Bill of Rights, and that the presidential pardon powers would allow corrupt presidents to pardon people who had committed crimes on presidential orders.
  • Mason's concerns about the abuse of the pardon powers were eventually proven right when Gerald Ford pardoned Richard Nixon, when Ronald Reagan pardoned FBI agents convicted of authorizing illegal break-ins, and when George H.W. Bush pardoned six individuals involved in the Iran-Contra Affair.
  • Mason was also proven right after the Federalists realized that the States would not ratify the Constitution unless a Bill of Rights was added. But this was done begrudgingly, as demonstrated by America's second president, Federalist John Adams, who essentially destroyed the right to freedom of speech via the Alien and Sedition Acts, which made it a crime to say, write or publish anything critical of the United States government.
  • Most criminals break laws that others have created, and people who assist in exposing or apprehending them are usually lauded as heroes. But with the "espionage" acts, the criminals themselves have actually created laws to conceal their crimes, and exploit these laws to penalize people who expose them.
  • The problem with America's system of government is that it has become too easy, and too convenient, to simply stamp "classified" on documents that reveal acts of government corruption, cover-up, mendacity and malfeasance, or to withhold them "in the interest of national security." Given this web of secrecy, is it any wonder why so many Americans are still skeptical about the "official" versions of the John F. Kennedy or Martin Luther King Jr. assassinations, or the events surrounding the attacks of September 11, 2001?
  • I want to believe that the Wikileaks documents will change America for the better. But what undoubtedly will happen is a repetition of the past: those who expose government crimes and cover-ups will be prosecuted or branded as criminals; new laws will be passed to silence dissent; new Liebermans will arise to intimidate the corporate-controlled media; and new ways will be found to conceal the truth.
  • What Wikileaks has done is make people understand why so many Americans are politically apathetic and content to lose themselves in one or more of the addictions American culture offers, be it drugs, alcohol, the Internet, video games, celebrity gossip, text-messaging; in essence, anything that serves to divert attention from the harshness of reality.
  • the evils committed by those in power can be suffocating, and the sense of powerlessness that erupts from being aware of these evils can be paralyzing, especially when accentuated by the knowledge that government evildoers almost always get away with their crimes
Weiye Loh

Measuring Social Media: Who Has Access to the Firehose? - 0 views

  • The question that the audience member asked — and one that we tried to touch on a bit in the panel itself — was who has access to this raw data. Twitter doesn’t comment on who has full access to its firehose, but to Weil’s credit he was at least forthcoming with some of the names, including stalwarts like Microsoft, Google and Yahoo — plus a number of smaller companies.
  • In the case of Twitter, the company offers free access to its API for developers. The API can provide access and insight into information about tweets, replies and keyword searches, but as developers who work with Twitter — or any large scale social network — know, that data isn’t always 100% reliable. Unreliable data is a problem when talking about measurements and analytics, where the data is helping to influence decisions related to social media marketing strategies and allocations of resources.
  • One of the companies that has access to Twitter’s data firehose is Gnip. As we discussed in November, Twitter has entered into a partnership with Gnip that allows the social data provider to resell access to the Twitter firehose. This is great on one level, because it means that businesses and services can access the data. The problem, as noted by panelist Raj Kadam, the CEO of Viralheat, is that Gnip’s access can be prohibitively expensive.
  • ...3 more annotations...
  • The problems with reliable access to analytics and measurement information are by no means limited to Twitter. Facebook data is also tightly controlled. With Facebook, privacy controls built into the API are designed to prevent mass data scraping. This is absolutely the right decision. However, a reality of social media measurement is that Facebook Insights isn’t always reachable and the data collected from the tool is sometimes inaccurate. It’s no surprise there’s a disconnect between the data that marketers and community managers want and the data that can be reliably accessed. Twitter and Facebook were both designed as tools for consumers. It’s only been in the last two years that the platform ecosystem aimed at serving large brands and companies has emerged
  • The data that companies like Twitter, Facebook and Foursquare collect are some of their most valuable assets. It isn’t fair to expect a free ride or first-class access to the data by anyone who wants it. Having said that, more transparency about what data is available to services and brands is needed and necessary. We’re just scratching the surface of what social media monitoring, measurement and management tools can do. To get to the next level, it’s important that we all question who has access to the firehose. (A minimal sketch of querying the public search API follows these annotations.)
  • We Need More Transparency for How to Access and Connect with Data
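For readers unfamiliar with what "free access to its API" looked like in practice, here is a minimal, hypothetical Python sketch of a keyword search against Twitter's public search endpoint. It assumes the v1.1 REST search API and application-only (bearer token) authentication that were roughly current around the time of the article; the endpoint, parameters, rate limits and auth scheme have all changed since, and the token value is a placeholder, not a real credential. The relevant point is that this public API returns a sampled slice of tweets, not the full firehose discussed on the panel.

    import requests

    # Placeholder credential: a real bearer token would come from Twitter's
    # developer portal. This value is hypothetical.
    BEARER_TOKEN = "YOUR_BEARER_TOKEN"

    def search_tweets(query, count=100):
        """Fetch recent tweets matching a keyword via the (assumed) v1.1 search API."""
        resp = requests.get(
            "https://api.twitter.com/1.1/search/tweets.json",
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
            params={"q": query, "count": count},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("statuses", [])

    tweets = search_tweets("social media measurement")
    print(len(tweets), "tweets returned -- a sample, not the firehose")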
Weiye Loh

Skepticblog » Education 2.0 - 0 views

  •  
    For education 2.0 to become a reality, the use of the internet and computer technology in primary education needs to become more than an afterthought - more than just an obligatory added layer, and more than just teaching students computer skills themselves. We need a massive effort to develop a digital infrastructure dedicated to computer and internet-based learning. We need schools and teachers to experiment more, to find what computers will do best, and what they are not good for. Primarily, I think we just need the development of dedicated programs and content for education. We need the equivalent of Facebook and Twitter for primary education - killer apps, the kind that are so effective that after their incorporation people will look back and wonder what they did before the application was available.
juliet huang

tools to live forever? - 1 views

  •  
    According to a news story on nanotechnology, in the future the wealthy will be able to make use of nanotechnology to modify parts of their existing or future genetic heritage, i.e. they can alter body parts in non-invasive procedures, or correct anomalies in their future children. http://www.heraldsun.com.au/business/fully-frank/the-tools-to-live-forever/story-e6frfinf-1225791751968 These modifications will then help them evolve into a different species, a better species. Ethical questions: most of the issues we've talked about in ethics are at the macro level, perpetuating a social group's agenda. However, biotechnology has the potential to make this divide a reality. It's no longer just an ethical question; it has the power to make what we discuss in class a reality. To frame it as an ethical question: who gets to decide, and how is the power evenly distributed? Power will always be present behind the use of technologies, but who will decide how this technology is used, and for whose good? And if it's for a larger good, then who can moderate this technology's usage to ensure all social actors are represented?
juliet huang

The tools to live forever ? - 1 views

According to a news story on nanotechnology, in the future, the wealthy will be able to make use of nanotechnology to modify parts of their existing or future genetic heritage, ie they can alter bo...

nanotechnology biotechnology

started by juliet huang on 28 Oct 09 no follow-up yet
Weiye Loh

When big pharma pays a publisher to publish a fake journal... : Respectful Insolence - 0 views

  • pharmaceutical company Merck, Sharp & Dohme paid Elsevier to produce a fake medical journal that, to any superficial examination, looked like a real medical journal but was in reality nothing more than advertising for Merck
  • As reported by The Scientist: Merck paid an undisclosed sum to Elsevier to produce several volumes of a publication that had the look of a peer-reviewed medical journal, but contained only reprinted or summarized articles--most of which presented data favorable to Merck products--that appeared to act solely as marketing tools with no disclosure of company sponsorship. "I've seen no shortage of creativity emanating from the marketing departments of drug companies," Peter Lurie, deputy director of the public health research group at the consumer advocacy nonprofit Public Citizen, said, after reviewing two issues of the publication obtained by The Scientist. "But even for someone as jaded as me, this is a new wrinkle." The Australasian Journal of Bone and Joint Medicine, which was published by Excerpta Medica, a division of scientific publishing juggernaut Elsevier, is not indexed in the MEDLINE database, and has no website (not even a defunct one). The Scientist obtained two issues of the journal: Volume 2, Issues 1 and 2, both dated 2003. The issues contained little in the way of advertisements apart from ads for Fosamax, a Merck drug for osteoporosis, and Vioxx.
  • there are numerous "throwaway" journals out there. "Throwaway" journals tend to be defined as journals that are provided free of charge, have a lot of advertising (a high "advertising-to-text" ratio, as it is often described), and contain no original investigations. Other relevant characteristics include: support drawn virtually entirely from advertising revenue, with ads placed within article pages, interrupting the articles, rather than between articles as in most medical journals that accept ads; content that consists almost entirely of reviews of existing material of variable (and often dubious) quality; a parasitic character, since throwaways often summarize peer-reviewed research from real journals; questionable (at best) peer review, catering to an uninvolved and uncritical readership; and no original work.
Weiye Loh

TODAYonline | World | The photo that's caused a stir - 0 views

  • reporters had not specifically asked the family's permission to publish them and that his parents had not wanted the photographs to be used. "There was no question that the photo had news value," AP senior managing editor John Daniszewski said. "But we also were very aware the family wished for the picture not to be seen." After lengthy internal discussions, AP concluded that the photo was a part of the war they needed to convey.
  • The US Defence Secretary, Mr Robert Gates, condemned the decision by the news agency Associated Press (AP) to publish the picture. "I cannot imagine the pain and suffering Lance Corporal Bernard's death has caused his family. Why your organisation would purposefully defy the family's wishes, knowing full well that it will lead to yet more anguish, is beyond me,"
  • ...1 more annotation...
  • the picture illustrated the sacrifice and the bravery of those fighting in Afghanistan."We feel it is our journalistic duty to show the reality of the war there, however unpleasant and brutal that sometimes is," said Mr Santiago Lyon, director of photography for AP.
  •  
    Ethical question: when the public's demand for information collides with a private party's demand for non-disclosure, which one should win? How do we measure the pros and cons?
  •  
    Journalistic Ethics
Jiamin Lin

Technological Freedom - 4 views

http://media.www.csucauldron.com/media/storage/paper516/news/2009/09/06/TheMeltingPot/Technological.Freedom-3759993.shtml Digital Rights Management (DRM) or should it be called "Digital Rights Mis...

started by Jiamin Lin on 16 Sep 09 no follow-up yet
Jody Poh

Bloggers bemoan Yahoo's role in writer's arrest - 3 views

http://news.cnet.com/8301-10784_3-5852898-7.html Shi Tao, a Chinese journalist, was convicted of sending a government 'top secret' message that had been sent to the newspaper agency he was workin...

online democracy freedom rights

started by Jody Poh on 15 Sep 09 no follow-up yet
Ang Yao Zong

Remember "Negarakuku"? - 3 views

http://www.mrbrown.com/blog/2007/04/muar_rapper_on_.html http://mt.m2day.org/2008/content/view/13039/84/ The two links above talk about Wee Meng Chee, a Malaysian rapper who is currently pursuing...

democracy speech freedom sedition

started by Ang Yao Zong on 15 Sep 09 no follow-up yet
Jude John

What's so Original in Academic Research? - 26 views

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Satveer

Spammed, scammed, jammed: Flu outbreak dominates online buzz - 10 views

This article is about online misinformation on social networking sites such as Twitter, and in this case it particularly pertains to health. It has to do with swine flu and how people on Twitte...

http:__www.canada.com_entertainment_Spammed%2Bscammed%2Bjammed%2

started by Satveer on 19 Aug 09 no follow-up yet
Inosha Wickrama

ethical porn? - 50 views

I've seen that video recently. Anyway, some points I need to make. 1. Different countries have different ages of consent. Does that mean children mature faster in some countries and not in other...

pornography

Inosha Wickrama

Pirate Bay Victory - 11 views

http://www.telegraph.co.uk/technology/news/4686584/Pirate-Bay-victory-after-illegal-file-sharing-charges-dropped.html Summary: The Pirate Bay, the biggest file-sharing internet site which was accu...

Elaine Ong

Turning dolls into babies - 6 views

http://www.chroniclelive.co.uk/north-east-news/todays-evening-chronicle/2007/09/11/when-does-a-doll-become-a-baby-72703-19770082/ Just to share an interesting article about how dolls nowadays are ...

started by Elaine Ong on 25 Aug 09 no follow-up yet
Elaine Ong

The gender digital divide in francophone Africa: A harsh reality - 10 views

http://www.apc.org/en/pubs/manuals/gender/africa/gender-digital-divide-francophone-africa-harsh-rea According to the principle of equality, everyone ought to have "equal entitlement to the condi...
