Politically Minded: Group items tagged Exposure

thinkahol *

HOLY BAILOUT - Federal Reserve Now Backstopping $75 Trillion Of Bank Of Ameri... - 0 views

  •  
    This story from Bloomberg just hit the wires this morning. Bank of America is shifting derivatives in its Merrill investment banking unit to its depository arm, which has access to the Fed discount window and is protected by the FDIC. What this means for you is that when Europe finally implodes and banks fail, U.S. taxpayers will be left holding the bag for trillions in CDS insurance contracts sold by Bank of America and JP Morgan. Even worse, the total exposure is unknown, because Wall Street lobbied successfully during the passage of Dodd-Frank to ensure that no central exchange exists to track net derivative exposure.
Michael Haltman

Southern Exposure, illegal immigration, Mexican border war, and Arizona immigration law... - 3 views

  •  
    Wouldn't it be a good idea to keep terrorists from moving across our border with Mexico? The Arizona immigration law is a good start!
thinkahol *

News Corpse » Study Confirms That Fox News Makes You Stupid: - 0 views

  •  
    Yet another study has been released that proves that watching Fox News is detrimental to your intelligence. World Public Opinion, a project managed by the Program on International Policy Attitudes at the University of Maryland, conducted a survey of American voters that shows that Fox News viewers are significantly more misinformed than consumers of news from other sources. What's more, the study shows that greater exposure to Fox News increases misinformation.
thinkahol *

Now That David Koch Is Gone From NIH Cancer Board, Formaldehyde Is Finally Classified A... - 0 views

  •  
    What's that word they use for a society where those with money and power are above the law? Oh, that's right: Oligarchy! While this regulatory capture continued, how many of us filled up our homes with these toxic products?

    Via Think Progress: Large manufacturers and chemical producers have lobbied ferociously to stop the National Institutes of Health from classifying formaldehyde as a carcinogen. A wide body of research has linked the chemical to cancer, but industrial polluters have stymied regulatory action. Last year, the New Yorker's Jane Mayer reported that billionaire David Koch, whose company Georgia Pacific (a subsidiary of Koch Industries) is one of the country's top producers of formaldehyde, was appointed to the NIH cancer board at a time when the NIH delayed action on the chemical. The news was met with protests from environmental groups. Faced with mounting pressure from Greenpeace and the scientific community, Koch offered an early resignation from the board in October.

    Yesterday, the NIH finally handed down a report officially classifying formaldehyde as a carcinogen: Government scientists listed formaldehyde as a carcinogen, and said it is found in worrisome quantities in plywood, particle board, mortuaries and hair salons. They also said that styrene, which is used in boats, bathtubs and in disposable foam plastic cups and plates, may cause cancer but is generally found in such low levels in consumer products that risks are low. Frequent and intense exposures in manufacturing plants are far more worrisome than the intermittent contact that most consumers have, but government scientists said that consumers should still avoid contact with formaldehyde and styrene along with six other chemicals that were added Friday to the government's official Report on Carcinogens. Its release was delayed for years because of intense lobbying from the chemical industry, which disputed its findings. An investigation by ProPublica found th
Skeptical Debunker

We're so good at medical studies that most of them are wrong - 0 views

  • Statistical validation of results, as Shaffer described it, simply involves testing the null hypothesis: that the pattern you detect in your data occurs at random. If you can reject the null hypothesis—and science and medicine have settled on rejecting it when there's only a five percent or less chance that it occurred at random—then you accept that your actual finding is significant. The problem now is that we're rapidly expanding our ability to do tests. Various speakers pointed to data sources as diverse as gene expression chips and the Sloan Digital Sky Survey, which provide tens of thousands of individual data points to analyze. At the same time, the growth of computing power has meant that we can ask many questions of these large data sets at once, and each one of these tests increases the prospects that an error will occur in a study; as Shaffer put it, "every decision increases your error prospects." She pointed out that dividing data into subgroups, which can often identify susceptible subpopulations, is also a decision, and increases the chances of a spurious error. Smaller populations are also more prone to random associations. In the end, Young noted, by the time you reach 61 tests, there's a 95 percent chance that you'll get a significant result at random. And, let's face it—researchers want to see a significant result, so there's a strong, unintentional bias towards trying different tests until something pops out. Young went on to describe a study, published in JAMA, that was a multiple testing train wreck: exposures to 275 chemicals were considered, 32 health outcomes were tracked, and 10 demographic variables were used as controls. That was about 8,800 different tests, and as many as 9 million ways of looking at the data once the demographics were considered. (A short simulation of this multiple-testing arithmetic is sketched below.)
  •  
    It's possible to get the mental equivalent of whiplash from the latest medical findings, as risk factors are identified one year and exonerated the next. According to a panel at the American Association for the Advancement of Science, this isn't a failure of medical research; it's a failure of statistics, and one that is becoming more common in fields ranging from genomics to astronomy. The problem is that our statistical tools for evaluating the probability of error haven't kept pace with our own successes, in the form of our ability to obtain massive data sets and perform multiple tests on them. Even given a low tolerance for error, the sheer number of tests performed ensures that some of them will produce erroneous results at random.
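The "61 tests" figure quoted above follows directly from the five percent threshold: the chance of at least one spurious hit in n independent tests is 1 - 0.95^n, which crosses 95 percent at around 59 tests. The Python sketch below is not from the panel; it is a minimal illustration under assumed conditions (independent two-sample t-tests run on pure noise, with numpy and scipy available).

```python
# Minimal sketch (not from the article): run many hypothesis tests on pure
# noise and count how many come out "significant" at the usual 5% threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

alpha = 0.05
n_tests = 61        # the number quoted by Young
n_subjects = 50     # hypothetical sample size per group (an assumption)

# Family-wise error rate when every test is run at alpha = 0.05.
fwer = 1 - (1 - alpha) ** n_tests
print(f"Chance of at least one spurious result in {n_tests} tests: {fwer:.1%}")

# Simulation: both groups are drawn from the same distribution, so every
# "significant" difference is a false positive by construction.
false_positives = 0
for _ in range(n_tests):
    group_a = rng.normal(size=n_subjects)
    group_b = rng.normal(size=n_subjects)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

print(f"False positives in one simulated batch of {n_tests} tests: {false_positives}")

# Bonferroni correction: testing each hypothesis at alpha / n_tests keeps the
# family-wise error rate near 5%, at the cost of statistical power.
print(f"Bonferroni-corrected threshold: {alpha / n_tests:.5f}")
```

With 61 tests at a five percent threshold you should expect roughly three false positives per batch on average, even though the simulated data contain no real effects; the Bonferroni line shows the simplest (and most conservative) way to correct for that.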