
Group items tagged: statistics


Skeptical Debunker

We're so good at medical studies that most of them are wrong - 0 views

  • Statistical validation of results, as Shaffer described it, simply involves testing the null hypothesis: that the pattern you detect in your data occurs at random. If you can reject the null hypothesis—and science and medicine have settled on rejecting it when there's only a five percent or less chance that it occurred at random—then you accept that your actual finding is significant.

    The problem now is that we're rapidly expanding our ability to do tests. Various speakers pointed to data sources as diverse as gene expression chips and the Sloan Digital Sky Survey, which provide tens of thousands of individual data points to analyze. At the same time, the growth of computing power has meant that we can ask many questions of these large data sets at once, and each one of these tests increases the prospects that an error will occur in a study; as Shaffer put it, "every decision increases your error prospects." She pointed out that dividing data into subgroups, which can often identify susceptible subpopulations, is itself a decision, and increases the chances of a spurious result. Smaller populations are also more prone to random associations.

    In the end, Young noted, by the time you reach 61 tests, there's a 95 percent chance that you'll get a significant result at random. And, let's face it—researchers want to see a significant result, so there's a strong, unintentional bias towards trying different tests until something pops out. Young went on to describe a study, published in JAMA, that was a multiple-testing train wreck: exposures to 275 chemicals were considered, 32 health outcomes were tracked, and 10 demographic variables were used as controls. That was about 8,800 different tests, and as many as 9 million ways of looking at the data once the demographics were considered.
  •  
    It's possible to get the mental equivalent of whiplash from the latest medical findings, as risk factors are identified one year and exonerated the next. According to a panel at the American Association for the Advancement of Science, this isn't a failure of medical research; it's a failure of statistics, and one that is becoming more common in fields ranging from genomics to astronomy. The problem is that our statistical tools for evaluating the probability of error haven't kept pace with our own successes, in the form of our ability to obtain massive data sets and perform multiple tests on them. Even given a low tolerance for error, the sheer number of tests performed ensures that some of them will produce erroneous results at random.
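The multiple-testing arithmetic described above can be sketched directly. A minimal illustration, under the simplifying assumption that the tests are independent and each run at the conventional alpha = 0.05 (the quoted figure of 61 tests presumably reflects slightly different assumptions):

```python
def family_wise_error_rate(k, alpha=0.05):
    """Probability of at least one spurious 'significant' result
    across k independent tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

# With one test the error chance is just alpha = 0.05; by roughly 60
# independent tests, the chance of at least one random "hit" exceeds 95%.
for k in (1, 14, 61):
    print(k, family_wise_error_rate(k))
```

The simplest countermeasure, Bonferroni correction, runs each of the k tests at alpha / k instead, at the cost of statistical power.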
Janos Haits

Free data, statistics, analysis, visualization & sharing - knoema.com - 0 views

  •  
    "Smarter Research With All Statistics In Your Hands"
Janos Haits

World and regional statistics, national data, maps, rankings - 0 views

  •  
    World Data Atlas: world and regional statistics, national data, maps, rankings. 370M+ time series, 960+ topics, 900+ sources. Take a look at the data coverage matrix by country or topic to see the full picture!
thinkahol *

Quantum magic trick shows reality is what you make it - physics-math - 22 June 2011 - N... - 2 views

  •  
    In 1967, Simon Kochen and Ernst Specker proved mathematically that even for a single quantum object, where entanglement is not possible, the values that you obtain when you measure its properties depend on the context. So the value of property A, say, depends on whether you chose to measure it with property B, or with property C. In other words, there is no reality independent of the choice of measurement.

    It wasn't until 2008, however, that Alexander Klyachko of Bilkent University in Ankara, Turkey, and colleagues devised a feasible test for this prediction. They calculated that if you repeatedly measured five different pairs of properties of a quantum particle that was in a superposition of three states, the results would differ for the quantum system compared with a classical system with hidden variables. That's because quantum properties are not fixed, but vary depending on the choice of measurements, which skews the statistics. "This was a very clever idea," says Anton Zeilinger of the Institute for Quantum Optics, Quantum Nanophysics and Quantum Information in Vienna, Austria. "The question was how to realise this in an experiment."

    Now he, Radek Lapkiewicz and colleagues have realised the idea experimentally. They used photons, each in a superposition in which they simultaneously took three paths. Then they repeated a sequence of five pairs of measurements on various properties of the photons, such as their polarisations, tens of thousands of times. They found that the resulting statistics could only be explained if the combination of properties that was tested was affecting the value of the property being measured. "There is no sense in assuming that what we do not measure about a system has [an independent] reality," Zeilinger concludes.
Janos Haits

GraphLab | Large-Scale Machine Learning - 0 views

  •  
    On top of our platform we have developed new sophisticated statistical models and machine learning algorithms for a wide range of applications including product targeting, market segmentation, community detection, network security, text analysis, and computer vision. These models enable our customers to extract more value from their data and to better understand and respond to a rapidly evolving world.
Janos Haits

GeoGebra - 0 views

  •  
    "THE GRAPHING CALCULATOR FOR FUNCTIONS, GEOMETRY, ALGEBRA, CALCULUS, STATISTICS AND 3D MATH!"
Janos Haits

CHB - 0 views

  •  
    Come work with us: Interested in working with researchers from different disciplines within the Harvard, MIT and Broad community, and in a unique opportunity to participate in world-class research that makes an impact on human health? We are looking for computational biologists to handle data from a wide variety of experimental methods, focusing on next-gen sequencing technologies.

    SCDE is live: The Stem Cell Discovery Engine (SCDE) is an integrated platform that allows users to consistently describe, share and compare cancer and tissue stem cell data. It is made up of an online database of curated experiments coupled to a customized instance of the Galaxy analysis engine, with tools for gene-list manipulation and molecular profile comparisons. The SCDE currently contains more than 50 stem cell-related experiments. Each has been manually curated and encoded using the ISA-Tab standard to ensure the quality of the data and its annotation.

    The Center for Health Bioinformatics at the Harvard School of Public Health provides consults to researchers for the management, integration and contextual analysis of biological high-throughput data. We are a member of the Center for Stem Cell Bioinformatics, the Environmental Statistics and Bioinformatics Core at the Harvard NIEHS Center for Environmental Health, and the Genetics & Bioinformatics Consulting group for Harvard Catalyst, and work closely with our colleagues in the Department of Biostatistics and the Program in Quantitative Genomics to act as a single point of contact for computational biology,
Erich Feldmeier

Noise and Signal - Nassim Taleb | Farnam Street - 0 views

  •  
    "There is a biological story with information. I have been repeating that in a natural environment, a stressor is information. So too much information would be too much stress, exceeding the threshold of antifragility. In medicine, we are discovering the healing powers of fasting, as the avoidance of too much hormonal rushes that come with the ingestion of food. Hormones convey information to the different parts of our system and too much of it confuses our biology. Here again, as with the story of the news received at too high a frequency, too much information becomes harmful. And in Chapter x (on ethics) I will show how too much data (particularly when sterile) causes statistics to be completely meaningless. Now let's add the psychological to this: we are not made to understand the point, so we overreact emotionally to noise. The best solution is to only look at very large changes in data or conditions, never small ones"
Janos Haits

NASA | Kasabi - 0 views

  •  
    This dataset consists of a conversion of the NASA NSSDC Master Catalog and extracts of the Apollo By Numbers statistics.
Erich Feldmeier

Social Media -  Christie Wilcox: Freelance Writer, Evolutionary Biologist - 0 views

  •  
    "If we are putting our time and resources into communicating science but we're not on social media, we're like a tree falling in an empty forest: yes, we're making noise, but no one is listening." "Only 17% of Americans can name a living scientist. That statistic crushes my heart."
Janos Haits

Never Ending Image Learning - 0 views

  •  
    NEIL (Never Ending Image Learner) is a computer program that runs 24 hours a day, 7 days a week, automatically extracting visual knowledge from Internet data. It is an effort to build the world's largest visual knowledge base with minimal human labeling effort - one that would be useful to many computer vision and AI efforts. See current statistics about how much NEIL knows about our world!
Erich Feldmeier

@biogarage #SP-personality Cynthia Thomson: The Genetics of Being a Daredevil - NYTimes... - 0 views

  •  
    "And again, in this expanded group, she found the same association between the variation of the DRD4 gene and a willingness to take risks on the slopes. The variant's overall effect was slight, explaining only about 3 percent of the difference in behavior between risk takers and the risk averse, but was statistically significant and remained intact"
John Smith

Webinar On Statistical Analysis of Gages - 0 views

  •  
    The seminar begins with an examination of the fundamental vocabulary and concepts related to metrology. Topics include: accuracy, precision, calibration, and "uncertainty ratios". Several of the standard methods for analyzing measurement variation are then described and explained, as derived from AIAG's Measurement System Analysis reference book. The methods include: Gage R&R (ANOVA method, for 3 gages, 3 persons, 3 replicates, and 10 parts), Gage Correlation (for 3 gages), Gage Linearity, and Gage Bias. The seminar ends with an explanation of how to combine all relevant uncertainty information into an "Uncertainty Budget" that helps determine the appropriate width of QC specification intervals (i.e., "guard-banded specifications"). Spreadsheets are used to demonstrate how to perform the methods described during the seminar.
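The final step described above, rolling component uncertainties into an "Uncertainty Budget" and guard-banding the specification limits, can be sketched in a few lines. This is a generic illustration with hypothetical helper names, assuming independent components combined by root-sum-of-squares (the standard GUM approach), not the seminar's actual spreadsheets:

```python
import math

def combined_standard_uncertainty(components):
    # Root-sum-of-squares of independent standard-uncertainty components.
    return math.sqrt(sum(u ** 2 for u in components))

def guard_banded_limits(lower, upper, u, k=2):
    # Tighten QC acceptance limits inward by the expanded
    # uncertainty U = k * u (k = 2 for roughly 95% coverage).
    U = k * u
    return lower + U, upper - U

# Example budget: gage repeatability, calibration, and linearity terms.
u = combined_standard_uncertainty([0.12, 0.05, 0.08])
print(guard_banded_limits(9.0, 11.0, u))
```

Guard-banding in this way trades acceptance-zone width for confidence that any accepted part is truly within specification.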
Skeptical Debunker

Use of DNA evidence is not an open and shut case, professor says - 0 views

  • In his new book, "The Double Helix and the Law of Evidence" (Harvard University Press), Kaye focuses on the intersection of science and law, and emphasizes that DNA evidence is merely information. "There's a popular perception that with DNA, you get results," Kaye said. "You're either guilty or innocent, and the DNA speaks the truth. That goes too far. DNA is a tool. Perhaps in many cases it's open and shut, in other cases it's not. There's ambiguity."
  • One of the book's key themes is that using science in court is hard to do right. "It requires lawyers and judges to understand a lot about the science," Kaye noted. "They don't have to be scientists or technicians, but they do have to know enough to understand what's going on and whether the statements that experts are making are well-founded. The lawyers need to be able to translate that information into a form that a judge or a jury can understand." Kaye also believes that lawyers need to better understand statistics and probability, an area that has traditionally been neglected in law school curricula. His book attempts to close this gap in understanding with several sections on genetic science and probability.

    The book also contends that scientists, too, have contributed to the false sense of certainty, since they are so often led by one side of a particular case to take an extreme position. Scientists need to approach their role as experts less as partisans and more as defenders of truth.

    Aiming to be a definitive history of the use of DNA evidence, "The Double Helix and the Law of Evidence" chronicles precedent-setting criminal trials, battles among factions of the scientific community, and a multitude of issues with the use of probability and statistics related to DNA. From the Simpson trial to the search for the last Russian Tsar, Kaye tells the story of how DNA science has impacted society. He delves into the history of the application of DNA science and probability within the legal system and depicts its advances and setbacks.
  •  
    Whether used to clinch a guilty verdict or predict the end of a "CSI" episode, DNA evidence has given millions of people a sense of certainty -- but the outcomes of using DNA evidence have often been far from certain, according to David Kaye, Distinguished Professor of Law at Penn State.
thinkahol *

Face Research Lab » Abstracts - 0 views

  •  
    Recent formulations of sexual selection theory emphasise how mate choice can be affected by environmental factors, such as predation risk and resource quality. Women vary greatly in the extent to which they prefer male masculinity and this variation is hypothesised to reflect differences in how women resolve the trade-off between the costs (e.g., low investment) and benefits (e.g., healthy offspring) associated with choosing a masculine partner. A strong prediction of this trade-off theory is that women's masculinity preferences will be stronger in cultures where poor health is particularly harmful to survival. We investigated the relationship between women's preferences for male facial masculinity and a health index derived from World Health Organization statistics for mortality rates, life expectancies, and the impact of communicable disease. Across 30 countries, masculinity preference increased as health decreased. This relationship was independent of cross-cultural differences in wealth or women's mating strategies. These findings show non-arbitrary cross-cultural differences in facial attractiveness judgments and demonstrate the utility of trade-off theory for investigating cross-cultural variation in women's mate preferences.
Erich Feldmeier

Vlastimil Hart: Frontiers in Zoology | Abstract | Dogs are sensitive to small variation... - 0 views

  •  
    "We measured the direction of the body axis in 70 dogs of 37 breeds during defecation (1,893 observations) and urination (5,582 observations) over a two-year period. After complete sampling, we sorted the data according to the geomagnetic conditions prevailing during the respective sampling periods. Relative declination and intensity changes of the MF during the respective dog walks were calculated from daily magnetograms. Directional preferences of dogs under different MF conditions were analyzed and tested by means of circular statistics.

    Results: Dogs preferred to excrete with the body being aligned along the north-south axis under calm MF conditions. This directional behavior was abolished under unstable MF. The best predictor of the behavioral switch was the rate of change in declination, i.e., polar orientation of the MF."
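Circular statistics, mentioned in the abstract above, treat directions as points on a circle rather than as ordinary numbers: a naive arithmetic average of 350° and 10° gives 180°, the opposite of the true mean direction. A minimal sketch of the mean direction and resultant length R, the basic quantities behind such directional tests (a hypothetical helper for illustration, not the study's actual analysis code):

```python
import math

def mean_resultant(angles_deg):
    # Average directions on the circle: sum the unit vectors for each
    # angle, then recover the mean direction and resultant length R.
    xs = [math.cos(math.radians(a)) for a in angles_deg]
    ys = [math.sin(math.radians(a)) for a in angles_deg]
    C = sum(xs) / len(xs)
    S = sum(ys) / len(ys)
    R = math.hypot(C, S)                      # 0 (uniform) .. 1 (aligned)
    mean_dir = math.degrees(math.atan2(S, C)) % 360
    return mean_dir, R

print(mean_resultant([350.0, 10.0]))  # mean near 0 deg, R near 1
```

R near 1 indicates a strong directional preference, while R near 0 indicates none; significance tests for directional data, such as the Rayleigh test, are built on this resultant length.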
Janos Haits

Welcome // | DiRT Directory - 0 views

  •  
    "The DiRT Directory is a registry of digital research tools for scholarly use. DiRT makes it easy for digital humanists and others conducting digital research to find and compare resources ranging from content management systems to music OCR, statistical analysis packages to mindmapping software."
Janos Haits

CTRL-Labs - 0 views

  •  
    "CTRL-Labs dedicates itself to answering the biggest questions in computing, neuroscience, and design so creators can dream. Our work to build a transformative brain-machine interface spans research and challenges at the intersection of computational neuroscience, statistics, machine learning, biophysics, hardware, and human-computer interaction."