Long Game: Group items tagged "methodology"

anonymous

Stratfor's Methodology - 0 views

  • The Intelligence Process
  • Love of One's Own and the Importance of Place
  • We seek to understand a country and its leaders in their own right, without bias or agenda. We maintain a fresh perspective and continually challenge preconceived notions. Because of this approach, we frequently depart from the conventional wisdom of the Western media. To reinforce this discipline, we have set up deliberate intellectual tensions to maintain a healthy level of interaction and rigorous debate among our entire team, so that no assumption or piece of information goes unchallenged.
  • ...1 more annotation...
    • anonymous
       
      In light of Wikileaks' revelation that many members of Stratfor lean pretty hard right, I think it's a great opportunity for them to better describe the "intellectual tensions" and how their process defuses inherent biases (which, even with the best of intentions, will crawl their way in). This is still something I eagerly await.
  •  
    "Stratfor's methodology begins with a framework for understanding the world and applies methods of gathering and analyzing information. The combination allows us to produce dispassionate, accurate and actionable insight for our clients and subscribers."
anonymous

Methodology | Stratfor - 0 views

  • We study the way in which geography and other forces constrain and shape people and nations. By analyzing the forces that affect world leaders, we can understand and often predict their actions and behaviors, which are far more limited than they might otherwise appear.
  • While the media concentrates on the subjective desires of leaders voiced at press conferences, Stratfor concentrates on the various constraints upon their behavior -- geographical, political, economic -- that are concrete but never admitted to publicly. Geopolitics allows us to place an event or action within a larger framework so that we can determine its potential significance, as well as identify connections among seemingly disparate trends.
  • Reports that showcase our geopolitical framework: Love of One's Own and the Importance of Place
    • anonymous
       
      This is an *invaluable* look at the phenomenon of nationalism.
  • ...7 more annotations...
  • The Intelligence Process
  • Intelligence means three things to us.
  • First, it is our method for gathering and processing information, which includes open-source publications in countries and languages all over the world and a large network of contacts.
  • Second, intelligence is how we critically examine and evaluate the context and predictive value of information, and it is how we connect our higher-level, strategic geopolitical framework to current events and breaking developments.
  • Third, we maintain a disciplined methodology and net assessments oriented toward forecasting -- explaining not only why something has happened but also what will happen next.
  • We seek to understand a country and its leaders in their own right, without bias or agenda. We maintain a fresh perspective and continually challenge preconceived notions. Because of this approach, we frequently depart from the conventional wisdom of the Western media. To reinforce this discipline, we have set up deliberate intellectual tensions to maintain a healthy level of interaction and rigorous debate among our entire team, so that no assumption or piece of information goes unchallenged.
  • Reports that showcase empathetic analysis: Thinking About the Unthinkable: A U.S.-Iranian Deal Germany's Choice Hezbollah, Radical but Rational
  •  
    "Stratfor's methodology begins with a framework for understanding the world and applies methods of gathering and analyzing information. The combination allows us to produce dispassionate, accurate and actionable insight for our clients and subscribers."
anonymous

Why Stratfor Tracks the Locations of U.S. Navy Capital Ships - 0 views

  • Roughly 90 percent of trade worldwide happens by sea, so the global economy depends on safe maritime transport. Unimpeded access to the seas is also necessary for the defense of far-flung national interests.
  • The United States could not have fought in World War II or Afghanistan, for example, without the ability to quickly move forces, supplies and aircraft to distant corners of the globe. Thus, the movement of U.S. Navy ships can tell us a lot about America's foreign policy.
  • For example, take the U.S. execution of sanctions on Iran in recent years. In April 2012, the Navy positioned a second carrier battle group in the Strait of Hormuz, thus sending a message to Iran that the United States was ready to respond if aggressive action were taken to close the strait.
  • ...6 more annotations...
  • During the Israel-Gaza conflict in November, the Navy diverted an amphibious group to the eastern Mediterranean, making it available to evacuate U.S. citizens if needed.
  • The key is looking for the unexpected.
  • Each week, we have a pretty good idea of where the ships will be, based on geopolitical patterns, strategies and developments. But surprises -- and they do happen -- allow us to challenge and re-evaluate our positions.
  • This is a fundamental part of our methodology: constantly checking our net assessments against new intelligence to maintain the high degree of accuracy on which our readers and clients depend. [A toy version of this check is sketched after this item.]
    • anonymous
       
      Their methodology is high-level. I'm still not satisfied that I have an accurate picture. Moreover, I'm on the lookout for tools to help me critique their work. I realize that's almost *cute* given that I'm a layman, but I'm pretty sure I can leverage the internet to at LEAST ask some interesting questions.
  • The map contains only publicly available, open-source information. We're not publishing any secrets; we're compiling available information and applying it to an easy-to-use, actionable graphic for our analysts, subscribers and clients.
  •  
    "In Stratfor's weekly Naval Update Map, we track the approximate locations of U.S. fleet aircraft carriers and amphibious war ships. The map helps our analysts -- and customers -- decipher Washington's strategy and even predict looming conflicts."
anonymous

Why Climate Deniers Have No Scientific Credibility - In One Pie Chart - 0 views

  • I searched the Web of Science for peer-reviewed scientific articles published between 1 January 1991 and 9 November 2012 that have the keyword phrases "global warming" or "global climate change." The search produced 13,950 articles. See methodology. [The resulting percentage is computed in the sketch after this item.]
  • Of one thing we can be certain: had any of these articles presented the magic bullet that falsifies human-caused global warming, that article would be on its way to becoming one of the most-cited in the history of science.
  • Global warming deniers often claim that bias prevents them from publishing in peer-reviewed journals. But 24 articles in 18 different journals, collectively making several different arguments against global warming, expose that claim as false. Articles rejecting global warming can be published, but those that have been have earned little support or notice, even from other deniers.
  • ...2 more annotations...
  • Anyone can repeat this search and post their findings. Another reviewer would likely have slightly different standards than mine and get a different number of rejecting articles. But no one will be able to reach a different conclusion, for only one conclusion is possible: Within science, global warming denial has virtually no influence. Its influence is instead on a misguided media, politicians all-too-willing to deny science for their own gain, and a gullible public.
  • Scientists do not disagree about human-caused global warming. It is the ruling paradigm of climate science, in the same way that plate tectonics is the ruling paradigm of geology. We know that continents move. We know that the earth is warming and that human emissions of greenhouse gases are the primary cause. These are known facts about which virtually all publishing scientists agree.
  •  
    "Polls show that many members of the public believe that scientists substantially disagree about human-caused global warming. The gold standard of science is the peer-reviewed literature. If there is disagreement among scientists, based not on opinion but on hard evidence, it will be found in the peer-reviewed literature."
anonymous

Abstract Science - 5 views

  •  
    "Scientific abstracts are the hooks attempting to capture a discerning reader's attention, the shortcuts saving the busy reader some time and the keys unlocking scientific knowledge for those lacking a portfolio of academic journal subscriptions. But don't be dismayed if you're still confused after reading an abstract multiple times. When writing this leading, summarizing paragraph of a scientific manuscript, researchers often make mistakes. Some authors include too much information about the experimental methods while forgetting to announce what they actually discovered. Others forget to include any methodology at all. Sometimes the scientists fail to divulge why they even conducted the study in the first place, yet feel comfortable boldly speculating with a loose-fitting claim of general importance. Nevertheless, the abstract serves a critical importance and every science enthusiast needs to become comfortable with reading them."
  • ...4 more comments...
  •  
    Took a class (well, more than one) with the UChicago professional writing program (http://writing-program.uchicago.edu/courses/index.htm). There was a lot of hammering this home to the writers of those abstracts, too. We've got all these forms, and it's not always clear to the reader or writer what's expected of those forms. This does not lead to effective communication.
  •  
    Too true. Sadly, it's a lesson that's still lost on some pretty senior P.I.'s, who think that 'lay summary' means simply spelling out all their acronyms.
  •  
    Honestly, this can be really hard and time-intensive work for some people. Some people understand what they need to do, but end up taking (usually very jargon-filled) shortcuts. I understand that, but I also know that it gets faster and easier with practice.
  •  
    Or hire an editor.
  •  
    It would be interesting to see how much purchase a suggestion like that receives. I suspect more than a few PI's would find the notion insulting because they've been doing it for years, and some of these really technical publications have been tolerating it for so long. For my part as an Admin, I would review the lay summary and give my impressions, which would then get (mostly) completely ignored. :)
  •  
    A _lot_ of people don't think they need professional writing and editing help. After all, they learned to write years ago.
anonymous

You Broke Peer Review. Yes, I Mean You | Code and Culture - 0 views

  • no more anonymous co-authors making your paper worse with a bunch of non sequiturs or footnotes with hedging disclaimers.
  • The thing is though that optimistic as I am about the new journal, I don’t think it will replace the incumbent journals overnight and so we still need to fix review at the incumbent journals.
  • So fixing peer review doesn’t begin with you, the author, yelling at your computer “FFS reviewer #10, maybe that’s how you would have done it, but it’s not your paper”
  • ...32 more annotations...
  • Nor, realistically, can fixing peer review happen from the editors telling you to go ahead and ignore comments 2, 5, and 6 of reviewer #6.
  • First, it would be an absurd amount of work
  • Second, from the editor’s perspective the chief practical problem is recruiting reviewers
  • they don’t want to alienate the reviewers by telling them that half their advice sucks in their cover letter
  • Rather, fixing peer review has to begin with you, the reviewer, telling yourself “maybe I would have done it another way myself, but it’s not my paper.”
  • You need to adopt a mentality of “is it good how the author did it” rather than “how could this paper be made better” (read: how would I have done it). That is the whole of being a good reviewer, the rest is commentary. That said, here’s the commentary.
  • Do not brainstorm
  • Responding to a research question by brainstorming possibly relevant citations or methods
  • First, many brainstormed ideas are bad.
  • When I give you advice as a peer reviewer there is a strong presumption that you take the advice even if it’s mediocre
  • Second, many brainstormed ideas are confusing.
  • When I give you advice in my office you can ask follow-up questions
  • When I give advice as a peer reviewer it’s up to you to hope that you read the entrails in a way that correctly augurs the will of the peer reviewers.
  • Being specific has the ancillary benefit that it’s costly to the reviewer which should help you maintain the discipline to thin the mindfart herd stampeding into the authors’ revisions.
  • Third, ideas are more valuable at the beginning of a project than at the end of it.
  • When I give you advice about your new project you can use it to shape the way the project develops organically. When I give it to you as a reviewer you can only graft it on after the fact.
  • it is essential to keep in mind that no matter how highly you think of your own expertise and opinions, the author doesn't want to hear it.
  • time is money. It usually takes me an amount of time that is at least the equivalent of a course release to turn around an R&R, and at most schools a course release in turn is worth about $10,000 to $30,000 if you're lucky enough to raise the grants to buy them.
  • Distinguish demands versus suggestions versus synapses that happened to fire as you were reading the paper
  • A lot of review comments ultimately boil down to some variation on “this reminds me of this citation” or “this research agenda could go in this direction.” OK, great. Now ask yourself, is it a problem that this paper does not yet do these things or are these just possibilities you want to share with the author?
  • As a related issue, demonstrate some rhetorical humility.
  • There’s wrong and then there’s difference of opinion
  • On quite a few methodological and theoretical issues there is a reasonable range of opinion. Don’t force the author to weigh in on your side.
  • For instance, consider Petev ASR 2013. The article relies heavily on McPherson et al ASR 2006, which is an extremely controversial article (see here, here, and here).
  • One reaction to this would be to say the McPherson et al paper is refuted and ought not be cited. However Petev summarizes the controversy in footnote 10 and then in footnote 17 explains why his own data is a semi-independent (same dataset, different variables) corroboration of McPherson et al.
  • These footnotes acknowledge a nontrivial debate about one of the article's literature antecedents and then situate the paper within the debate.
  • Theoretical debates are rarely a matter of decisive refutation or strictly cumulative knowledge; rather, at any given time there's a reasonable range of opinions, and you shouldn't demand that the author go with your view -- at most, that they explore its implications if they were to.
  • There are cases where you fall on one side of a theoretical or methodological gulf and the author on another to the extent that you feel that you can’t really be fair.
  • you as the reviewer have to decide if you’re going to engage in what philosophers of science call “the demarcation problem” and sociologists of science call “boundary work” or you’re going to recuse yourself from the review.
  • Don’t try to turn the author’s theory section into a lit review.
  • The theory section is not about demonstrating basic competence or reciting a creedal confession and so it does not need to discuss every book or article ever published on the subject or even just the things important enough to appear on your graduate syllabus or field exam reading list.
  • If the submission reminds you of a citation that’s relevant to the author’s subject matter, think about whether it would materially affect the argument.
  •  
    "I'm as excited as anybody about Sociological Science as it promises a clean break from the "developmental" model of peer review by moving towards an entirely evaluative model. That is, no more anonymous co-authors making your paper worse with a bunch of non sequiturs or footnotes with hedging disclaimers. (The journal will feature frequent comment and replies, which makes debate about the paper a public dialog rather than a secret hostage negotiation). The thing is though that optimistic as I am about the new journal, I don't think it will replace the incumbent journals overnight and so we still need to fix review at the incumbent journals."
anonymous

Geopolitical Intelligence, Political Journalism and 'Wants' vs. 'Needs' - 2 views

  • At Stratfor, the case is frequently the opposite: Our readers typically are expert in the topics we study and write about, and our task is to provide the already well-informed with further insights. But the question is larger than that.
  • We co-exist in this ecosystem, but geopolitical intelligence is scarcely part of the journalistic flora and fauna. Our uniqueness creates unique challenges
  • Instead, let's go to the core dynamic of the media in our age and work back outward through the various layers to what we do in the same virtual space, namely, intelligence.
  • ...17 more annotations...
  • You could get the same information with a week's sorting of SEC filings. But instead, you have just circumvented that laborious process by going straight to just one of the "meta-narratives" that form the superstructure of journalism.
  • Meta-Narratives at Journalism's Core: Welcome to the news media's inner core.
  • For the fundamental truth of news reporting is that it is constructed atop pre-existing narratives comprising a subject the reader already knows or expects, a description using familiar symbolism often of a moral nature, and a narrative that builds through implicit metaphor from the stories already embedded in our culture and collective consciousness.
  • The currency of language really is the collection of what might be called the "meta-stories."
  • There's nothing wrong with this. For the art of storytelling -- journalism, that is -- is essentially unchanged from the tale-telling of Neolithic shamans millennia ago up through and including today's New York Times. Cultural anthropologists will explain that our brains are wired for this. So be it.
  • We at Stratfor may not "sync up." Journalists certainly do.
  • Meta-Narratives Meet Meta-Data: There is nothing new in this; it is a process almost as old as the printing press itself. But where it gets particularly new and interesting is with my penultimate layer of difference, the place where meta-narratives meet meta-data.
  • "Meta-data," as the technologists call it, is more simply understood as "data about data."
  • Where the online battle for eyeballs becomes truly epic, however (Google "the definition of epic" for yet another storyteller's meta-story), is when these series of tags are organized into a form of meta-data called a "taxonomy."
  • And thus we arrive at the outermost layer of the media's skin in our emerging and interconnected age. This invisible skin over it all comes in the form of a new term of art, "search engine optimization," or in the trade just "SEO."
  • With journalists already predisposed by centuries of convention to converge on stories knitted from a common canon, the marriage of meta-narrative and meta-data simply accelerates to the speed of light the calibration of topic and theme.
  • If a bit simplified, these layers add up to become the connective tissue in a media-centric and media-driven age. Which leads me back to the original question of why Stratfor so often "fails to sync up with the media."
  • For by the doctrines of the Internet's new commercial religion, a move disrupting the click stream was -- and is -- pure heresy. But our readers still need to know about Colombia, just as they need our unique perspectives on Syria.
  • Every forecast and article we do is essentially a lab experiment, in which we put the claims of politicians, the reports on unemployment statistics, the significance of a raid or a bombing to the test of geopolitics.
  • We spend much more time studying the constraints on political actors -- what they simply cannot do economically, militarily or geographically -- than we do examining what they claim they will do.
  • The key characteristic to ponder here is that such methodology -- intelligence, in this case -- seeks to enable the acquisition of knowledge by allowing reality to speak for itself. Journalism, however, creates a reality atop many random assumptions through the means described. It is not a plot, a liberal conspiracy or a secret conservative agenda at work, as so many media critics will charge. It is simply the way the media ecosystem functions. 
  • Journalism, in our age more than ever before, tells you what you want to know. Stratfor tells you what you need to know. 
  •  
    "Just last week, the question came again. It is a common one, sometimes from a former colleague in newspaperdom, sometimes from a current colleague here at Stratfor and often from a reader. It is always to the effect of, "Why is Stratfor so often out of sync with the news media?" All of us at Stratfor encounter questions regarding the difference between geopolitical intelligence and political journalism. One useful reply to ponder is that in conventional journalism, the person providing information is presumed to know more about the subject matter than the reader. At Stratfor, the case is frequently the opposite: Our readers typically are expert in the topics we study and write about, and our task is to provide the already well-informed with further insights. But the question is larger than that."
  •  
    Excuse me while I guffaw. Stratfor is not the first to claim that they're the only ones not swayed by financial factors. Stratfor has its own metanarratives (especially geographic determinism) as much as anyone else does.
anonymous

Look at This Visualization of Drone Strike Deaths - 0 views

  • The data is legit; it comes from the Bureau of Investigative Journalism, but as Emma Roller at Slate notes, the designers present it weirdly, claiming at the beginning of the interactive that fewer than 2 percent of drone deaths have been "high profile targets," and "the rest are civilians, children and alleged combatants." At the end of the visualization, you find out that a majority of the deaths fall into the "legal gray zone created by the uncertainties of war," as Brian Fung put it at National Journal.
  • But the "legal gray zone" itself is alarming enough—highlighting the lack of transparency surrounding the administration's drone program—as are the discrepancies in total numbers killed. It's between 2,537 and 3,581 (including 411 to 884 civilians) killed since 2004, if you want to go with the BIJ. Or it's between 1,965 and 3,295 people since 2004 (and 261 to 305 civilians), if you want to believe the Counterterrorism Strategy Initiative at the New America Foundation. Or perhaps it's 2,651 since 2006 (including 153 civilians), according to Long War Journal. (The NAF and Long War Journal base estimates on press reports. BIJ also includes deaths reported to the US or Pakistani governments, military and intelligence officials, and other academic sources.)
  •  
    "Pitch Interactive, a California-based data visualization shop, has created a beautiful, if somewhat controversial, visualization of every attack by the US and coalition forces in Pakistan since 2004." Fucking sobering.
anonymous

Invelox wind turbine claims 600% advantage in energy output - 0 views

shared by anonymous on 14 Jun 13
  • Invelox takes a novel approach to wind power generation as it doesn’t rely on high wind speeds. Instead, it captures wind at any speed, even a breeze, from a portal located above ground. The wind captured is then funneled through a duct where it will pick up speed. The resulting kinetic energy will drive the generator on the ground level. By bringing the airflow from the top of the tower, it’s possible to generate more power with smaller turbine blades, SheerWind says.
  • As to the sixfold output claim, as with many new technologies promising a performance breakthrough, it needs to be viewed with caution. SheerWind makes the claim based on its own comparative tests, the precise methodology of which is not entirely clear. [A cube-law sanity check follows this item.]
  • Besides power performance and the fact it can operate at wind speeds as low as 1 mph, SheerWind says Invelox costs less than US$750 per kilowatt to install. It is also claimed that operating costs are significantly reduced compared to traditional turbine technology. Due to its reduced size, the system is supposedly safer for birds and other wildlife, concerns that also informed the designers of the Ewicon bladeless turbine.
  •  
    "SheerWind, a wind power company from Minnesota, USA, has announced the results of tests it has carried out with its new Invelox wind power generation technology. The company says that during tests its turbine could generate six times more energy than the amount produced by traditional turbines mounted on towers. Besides, the costs of producing wind energy with Invelox are lower, delivering electricity with prices that can compete with natural gas and hydropower."
anonymous

Pundit Forecasts All Wrong, Silver Perfectly Right. Is Punditry Dead? | TechCrunch - 1 views

  • Silver’s analysis, and statistical models generally, factor in more data points than even the most knowledgeable political insider could possibly juggle in their working memory. His model incorporates the size, quality, and recency of all polls, and weights them based on the polling firm’s past predictive success (among other more advanced statistical procedures).
  • Silver’s methods present a dilemma for television networks. First, viewers would have to be a math geek to follow along in the debates. Even if networks replaced their pundits with competitor statisticians, the only way to compare forecasts would be to argue over nuanced statistical techniques. People may say they’re fans of Silver, but just wait until every political network is fighting over their own complex model and see how inaccessible election prediction becomes to most viewers.
  • Second, there’s no more rating-spiking shocking polls. Usually, the most surprising polls, which garner headlines, are the most inaccurate. Instead, in Silver’s universe, we’ll follow polling averages, with steadily (read: boringly) ebb and wane in relatively predictable directions.
  • ...2 more annotations...
  • But, perhaps the most devastating impact on traditional punditry: politics and campaigning have a relatively small impact on elections. According to Silver's model, Obama had a strong likelihood of winning several months before the election. Elections favor incumbents and Romney was an uncharismatic opponent, who wasn't all that well liked even within his own party. Other influential factors, such as the economy, are completely outside the control of campaigns. The economy picked up before the election. Any conservative challenger had an uphill battle.
  • So, all the bluster about Americans not connecting with Obama or his “radical” social agenda is just hot air. Most of the pundit commentary that fills up airtime in the 24 hour news cycle is, politically speaking, mostly inconsequential.
  •  
    "The New York Times election statistician, Nate Silver, perfectly predicted all 50 states last night for President Obama, while every single major pundit was wrong-some comically wrong. Despite being derided by TV talking heads as a liberal hack, Silver definitively proved that geeks with mathematical models were superior to the gut feelings and pseudo-statistics of so-called political experts. The big question is, will the overwhelming success of statistical models make pundit forecasting obsolete, or will producers stubbornly keep them on the air?"
anonymous

Lies, Damned Lies, and Medical Science - 0 views

  • For whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names.
  • One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good?
  • Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began?
  • ...33 more annotations...
  • "Maybe sometimes it's the questions that are biased, not the answers," he said, flashing a friendly smile.
  • That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research.
  • He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong.
  • He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
  • “I take all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me, as we contemplate the tree the day after the team’s meeting. “‘Will my research grant be approved?’” He chuckles, but Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack. And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.
  • “I assumed that everything we physicians did was basically right, but now I was going to help verify it,” he says. “All we’d have to do was systematically review the evidence, trust what it told us, and then everything would be perfect.” It didn’t turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science “never minds” are hardly secret. And they sometimes make headlines, as when in recent years large studies or growing consensuses of researchers concluded that mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; or when widely prescribed antidepressants such as Prozac, Zoloft, and Paxil were revealed to be no more effective than a placebo for most cases of depression; or when we learned that staying out of the sun entirely can actually increase cancer risks; or when we were told that the advice to drink lots of water during intense exercise was potentially fatal; or when, last April, we were informed that taking fish oil, exercising, and doing puzzles doesn’t really help fend off Alzheimer’s disease, as long claimed. Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.
  • “I realized even our gold-standard research had a lot of problems,” he says.
  • This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
  • Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research.
  • In 2005, he unleashed two papers that challenged the foundations of medical research.
  • He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. [This proof is restated in miniature after this item.]
  • The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views.
  • sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim.
  • Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.
  • When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have pretty good reason to take more vitamin X, and physicians routinely pass these recommendations on to patients. But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
  • the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up.
  • But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you.
  • And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge).
  • And so it goes for all medical studies, he says. Indeed, nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest. The exciting links between genes and various diseases and traits that are relentlessly hyped in the press for heralding miraculous around-the-corner treatments for everything from colon cancer to schizophrenia have in the past proved so vulnerable to error and distortion, Ioannidis has found, that in some cases you’d have done about as well by throwing darts at a chart of the genome.
  • Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it.
  • The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
  • Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.
  • Medical research is not especially plagued with wrongness. Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).
  • Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.” Ioannidis offers a theory for the relatively calm reception. “I think that people didn’t feel I was only trying to provoke them, because I showed that it was a community problem, instead of pointing fingers at individual examples of bad research,” he says. In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.
  • The irony of his having achieved this sort of success by accusing the medical-research community of chasing after success is not lost on him, and he notes that it ought to raise the question of whether he himself might be pumping up his findings.
  • “If I did a study and the results showed that in fact there wasn’t really much bias in research, would I be willing to publish it?” he asks. “That would create a real psychological conflict for me.” But his bigger worry, he says, is that while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”
  • What they’re not trained to do is to go back and look at the research papers that helped make these drugs the standard of care.
  • Tatsioni doesn’t so much fear that someone will carve out the man’s healthy appendix. Rather, she’s concerned that, like many patients, he’ll end up with prescriptions for multiple drugs that will do little to help him, and may well harm him. “Usually what happens is that the doctor will ask for a suite of biochemical tests—liver fat, pancreas function, and so on,” she tells me. “The tests could turn up something, but they’re probably irrelevant. Just having a good talk with the patient and getting a close history is much more likely to tell me what’s wrong.” Of course, the doctors have all been trained to order these tests, she notes, and doing so is a lot quicker than a long bedside chat. They’re also trained to ply the patient with whatever drugs might help whack any errant test numbers back into line.
  • patients often don’t even like it when they’re taken off their drugs, she explains; they find their prescriptions reassuring.
  • “Researchers and physicians often don’t understand each other; they speak different languages,” he says. Knowing that some of his researchers are spending more than half their time seeing patients makes him feel the team is better positioned to bridge that gap; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians.
  • Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding.
  • “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • being wrong in science is fine, and even necessary
  •  
    "Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors-to a striking extent-still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science." By David H. Freedman at The Atlantic on November 2010.
anonymous

Odds Are, It's Wrong - 0 views

  •  
    "It's science's dirtiest secret: The "scientific method" of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing." By Tom Siegfried at Science News on March 27, 2010.
anonymous

Fending Off Digital Decay, Bit by Bit - 0 views

  • As research libraries and archives are discovering, “born-digital” materials — those initially created in electronic form — are much more complicated and costly to preserve than anticipated.
  • archivists are finding themselves trying to fend off digital extinction at the same time that they are puzzling through questions about what to save, how to save it and how to make that material accessible.
  • Leslie Morris, a curator at the Houghton Library, said, “We don’t really have any methodology as of yet” to process born-digital material. “We just store the disks in our climate-controlled stacks, and we’re hoping for some kind of universal Harvard guidelines,” she added.
  • ...4 more annotations...
  • Mr. Rushdie started using a computer only when the Ayatollah Khomeini’s 1989 fatwa drove him underground. “My writing has got tighter and more concise because I no longer have to perform the mechanical act of re-typing endlessly,” he explained during an interview while in hiding. “And all the time that was taken up by that mechanical act is freed to think.”
  • At the Emory exhibition, visitors can log onto a computer and see the screen that Mr. Rushdie saw, search his file folders as he did, and find out what applications he used. (Mac Stickies were a favorite.) They can call up an early draft of Mr. Rushdie’s 1999 novel, “The Ground Beneath Her Feet,” and edit a sentence or post an editorial comment.
    • anonymous
       
      This is very cool. I'm intrigued by this sort of thing because central to my frustrations is the impossibility of understanding the zeitgeist of a period.
  • To the Emory team, simulating the author’s electronic universe is equivalent to making a reproduction of the desk, chair, fountain pen and paper that, say, Charles Dickens used, and then allowing visitors to sit and scribble notes on a copy of an early version of “Bleak House.”
  • The heart of the lab is the Forensic Recovery of Evidence Device, nicknamed FRED, which enables archivists to dig out data, bit by bit, from current and antiquated floppies, CDs, DVDs, hard drives, computer tapes and flash memories, while protecting the files from corruption.
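
The corruption-protection half of that job has a simple, standard core: record a cryptographic digest of every recovered file at ingest, then re-verify before each use. A minimal sketch of such "fixity" checking (not the FRED workstation's actual software):

    import hashlib
    import pathlib

    def digest(path):
        """SHA-256 of a file, read in chunks to handle large images."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def ingest(root):
        """Record a digest for every file under an acquired directory."""
        return {str(p): digest(p)
                for p in pathlib.Path(root).rglob("*") if p.is_file()}

    def verify(manifest):
        """Return the paths whose bits have changed since ingest."""
        return [p for p, d in manifest.items() if digest(p) != d]

    # manifest = ingest("rushdie_disks/")   # hypothetical path
    # assert verify(manifest) == []         # every bit still intact

Any file whose digest drifts has decayed or been altered, which is exactly the condition an archive needs to catch early.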
anonymous

The Core Ideas of Science - 0 views

  • Here’s the web page for the report, a summary (pdf), and the report itself (pdf, free after you register).
  • The first category is “Scientific and Engineering Practices,” and includes such laudable concepts as “Analyzing and interpreting data.”
  • The second category is “Crosscutting Concepts That Have Common Application Across Fields,” by which they mean things like “Scale, proportion, and quantity” or “Stability and change.”
  • ...2 more annotations...
  • The third category is the nitty-gritty, “Core Ideas in Four Disciplinary Areas,” namely “Physical Sciences,” “Life Sciences,” “Earth and Space Sciences,” and “Engineering, Technology, and the Applications of Science.”
  • Whether or not these concepts and the grander conceptual scheme actually turn out to be useful will depend much more on implementation than on this original formulation. The easy part is over, in other words. The four ideas above seem vague at first glance, but they are spelled out in detail in the full report, with many examples and very specific benchmarks.
  •  
    "A National Academy of Sciences panel, chaired by Helen Quinn, has released a new report that seeks to identify 'the key scientific practices, concepts and ideas that all students should learn by the time they complete high school.'" Conspicuously missing from Discover writeup: methodology. I'd pair that with critical thinking. Are either of those prime requirements, yet?
anonymous

The Economic Manhattan Project - 1 views

  • According to the organizers, "Concerns over the current financial situation are giving rise to a need to evaluate the very mathematics that underpins economics as a predictive and descriptive science. A growing desire to examine economics through the lens of diverse scientific methodologies — including physics and complex systems — is making way to a meeting of leading economists and theorists of finance together with physicists, mathematicians, biologists and computer scientists in an effort to evaluate current theories of markets and identify key issues that can motivate new directions for research."
  •  
    "After all, we are witnessing the Waterloo of Wall Street. So, ironically, it was in the Canadian province of Ontario, in the small town of Waterloo, that a meeting was convened to shed new light on the world's financial debacle. In a densely packed conference schedule, the general approach was to take measure of the crisis not only in a new way, but with instruments never used before. Even the venue for event, the Perimeter Institute for Theoretical Physics, was itself programmatic, though invitations to participate were sent far beyond the boundaries of economics and physics to mathematicians, lawyers, behavioral economists, risk managers, evolutionary biologists, complexity theorists and computer scientists.- Jordan Mejias, Frankfurter Allgemeine Zeitung"
anonymous

Rand & Aesthetics 20 - 2 views

  • It is like a moment of rest, a moment to gain fuel to move farther. Art gives him that fuel; the pleasure of contemplating the objectified reality of one’s own sense of life is the pleasure of feeling what it would be like to live in one’s ideal world.
    • anonymous
       
      Quote by Rand
  • I suspect that this statement explains more about Rand's aesthetics than any of Rand's specific theories about art.
    • anonymous
       
      Which is at the heart of what passes for her methodology.
  • Now while anyone may have as narrow (or as wide) aesthetic tastes as they please, in a philosopher of aesthetics, such prejudices are deeply problematic. How can a philosopher provide insights on aesthetics applicable to all (or at least most) individuals when their tastes are so confined within the narrow bounds of their own narcissistic agendas?
  • ...6 more annotations...
  • Don Quixote is a malevolent universe attack on all values as such. It belongs in the same class with two other books, which together make up the three books I hate most: Don Quixote, Anna Karenina, and Madame Bovary.
    • anonymous
       
      A general rule of thumb: Any books Ayn Rand hates are very likely classics worthy of your attention.
    • Erik Hanson
       
      Where is the "like" button on that note?
    • anonymous
       
      Hah. Thanks. I know my Rand-bashing is probably old to some of my peeps. I try to keep most off the radar, but as a recovered Objectivist, this is all very cathartic.
  • And by implication, anyone who admires and enjoys these three novels is also evil. Rand was not content merely to state her own likes and dislikes, however narrow and prejudiced these might have been; but she also had to attack and disparage those whose tastes differed from her own.
  • In going through Rand's aesthetic judgments, one can't help noticing how often Rand conflates her personal tastes with objective truth
  • Her "Objectivist" philosophy is really the most subjective of philosophies. It's all about her: her tastes, her emotions, her wants, her needs, all writ large in platonic letters across the heavens.
  • The standard of truth and morality in Objectivism is not "reason" or logic or fact; it is Ayn Rand herself. What Rand said is true is true, despite what all the great thinkers and scientists said before her. What Ayn Rand said is good or evil is good or evil, regardless of whatever natural needs may exist elsewhere in the universe. This explains, perhaps more than anything else, why Objectivism so quickly degenerated into an Ayn Rand personality cult.
  • Rand claims to found her philosophy on the axiom that existence exists; but it is really founded on the (implicit) axiom that equates Rand's thoughts and judgments with objective truth.
  •  
    Succinct, scathing, and a hell of a read. It's Ayn Rand as the brooding teenager figuring the universe out via scribbling passionate post-its and arranging them on a corkboard. She had it all figured out... It begins: Art as "fuel." For Rand, one of the primary objectives of art was to serve as a kind of spiritual sustenance or "fuel"
anonymous

America's Epidemic of Enlightened Racism - 0 views

  • the summary dismissal of the column – without substantive rebuttals to claims that are so racist as to seem to be beneath public discourse – means that he can play the role of victim of political correctness gone amok.
  • Derbyshire claims that his ideas are backed up by “methodological inquiries in the human sciences,” and includes links to sites that provide all the negative sociological data about black people you’d ever need to justify your fear of them, including the claim that “blacks are seven times more likely than people of other races to commit murder, and eight times more likely to commit robbery.”
  • So he can cast himself as someone who had the courage to tell it like it is – with all the sociological data backing him up – only to be punished for this by the reactionary hypocrites who control the public discourse.
  • ...25 more annotations...
  • Once again, he can tell himself, those quick to cry “racism” have prevented an honest conversation about race.
  • If Derbyshire were a lone crank, none of it would matter much. But he’s not.
  • they see themselves as advocates of a sort of enlightened racism that doesn’t shrink from calling a spade a spade but isn’t inherently unjust.
  • Enlightened racism is meant to escape accusations of being racist in the pejorative sense via two avenues: the first is the appeal to data I have just described. The second is a loophole to the effect that exceptions are to be made for individuals.
  • They couldn’t care less about skin color, they say; it really is the content of people’s characters that concerns them, and that content really does suffer more in blacks than whites.
  • Because they are so widespread and aim to restore the respectability of interracial contempt, these attempts at an enlightened racism deserve a rebuttal. Especially in light of the fact that those who hold such views often see themselves as the champions of reason over sentiment, when in fact their views are deeply irrational.
  • First, a history of slavery, segregation, and (yes) racism, means that African American communities suffer from some social problems at higher rates than whites.
  • But that doesn’t change the fact that the majority of black people – statistically, and not just based on politically correct fuzzy thinking – are employed, not on welfare, have no criminal record, and so on and so forth.
  • So the kind of thinking that enlightened racists see as their way of staring a hard reality right in the face turns out to be just a silly rationalization using weak statistical differences.
  • In other words, one’s chances of being a victim of violent crime is already so low, that even accounting  for higher crime rates among African Americans, one’s chance of being a victim of violent crime by an African American remains very low.
  • The argument that Derbyshire and those like him make is that we are justified in treating an entire population as a threat – in essentially shunning them in the most degrading way – because one’s chances of being harmed by any given member of that population, while very low, is not quite as low as one’s chances of being harmed by the general population.
  • It’s an argument that starts out with sociological data and quickly collapses to reveal the obvious underlying motivation: unenlightened racism of the coarsest variety.
  • Second, there is the issue of character: because this, after all, is what really motivates these attempts at establishing an enlightened racism that gives individuals the benefit of the doubt while acknowledging the truth about general cultural differences.
  • I think it suffices to respond in the following way: people tend to mistake their discomfort with the cultural differences of a group with that group’s inferiority. (They also tend to conflate their political and economic advantages with psychological superiority).
  • If they respond with sociological data about education and birth rates and all the rest, we only have to respond that like crime rates, they’re exactly the sort of consequences one would expect from a history of oppression and even then fail to justify racist stereotypes.
  • The fact is that were we to pick a white person or black person at random, the same truths hold: they very likely have a high school diploma, and probably do not have a bachelor’s degree. They’re probably employed and not on welfare. They’ve probably never been to prison, and they almost certainly are not going to harm you. These are the broad statistical truths that simply do not vary enough between races to justify the usual stereotypes.
  • So here is the hard truth that advocates of enlightened racism need to face: their sociological data and ideas about black character, intelligence and morality are post-hoc rationalizations of their discomfort with average cultural differences between whites and blacks.
  • The fact that they have black friends and political heroes, or give individuals the benefit of the doubt as long as they are “well-socialized” and “intelligent” just means that they can suppress that discomfort if the cultural differences are themselves lessened to a tolerable degree.
  • And so they need to disabuse themselves of the idea that true, unenlightened racism is a term very narrowly defined: that it requires a personal hatred of individual black people based on their skin color despite evidence of redeeming personal qualities.
  • What they think of as redeeming personal qualities are just qualities that tend to make them less uncomfortable. But the hatred of black culture and post-hoc rationalizations of this hatred using sociological data are just what racism is.
  • This is not to say that mere discomfort with cultural difference is the same thing as racism (or xenophobia). Such discomfort is unavoidable: You’d have this sort of discomfort if you tried live in a foreign country for a while, and you’d be tempted by the same sorts of ideas about how stupid and mean people are for not doing things the way you’re used to.
  • strange customs become “stupid” because they reflect less of ourselves back to us than we’re used to.
  • That lack of reflection is felt not only as a distressing deprivation of social oxygen, but as an affront, a positive discourtesy.
  • The mature way to deal with such discomfort is to treat it as of a kind with social anxiety in general: people are strange, when you’re a stranger. Give it some time, and that changes. But it won’t change if you develop hefty rationalizations about the inferiority and dangerousness of others and treat these rationalizations as good reasons for cultural paranoia.
  • Americans seem to have difficulty engaging in the required reflective empathy, and imagining how they would feel if they knew that every time they walked into a public space a large number of a dominant racial majority looked at them with fear and loathing. They might, under such circumstances, have a bad day.
  •  
    From Nick Lalone in Buzz. "John Derbyshire has been fired from the National Review for an openly racist column on how white people should advise their children with respect to "blacks": for the most part, avoid them. Because on the whole, they are unintelligent, antisocial, hostile, and dangerous. Or as he puts it, avoid "concentrations of blacks" or places "swamped with blacks," and leave a place when "the number of blacks suddenly swells," and keep moving when "accosted by a strange black" in the street. The language is alarmingly dehumanizing: black people come in "swamps" and "concentrations" (and presumably also in hordes, swarms, and just plain gangs). And it's clearly meant to be a dismissal of the notion - much talked about recently in light of the Trayvon Martin shooting - that African Americans should be able to walk down the street without being shunned, much less attacked."
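
The base-rate rebuttal in the annotations above can be made concrete with a few lines of arithmetic. The rates below are hypothetical placeholders chosen only to show the shape of the argument, not measured figures:

    # Base-rate illustration: a large relative-risk multiplier applied
    # to a tiny baseline still yields a tiny absolute risk.
    baseline = 0.0004     # assumed annual risk of violent victimization
    multiplier = 7.0      # the kind of relative figure Derbyshire cites

    risk_general = baseline
    risk_claimed = baseline * multiplier
    print(f"General population : {risk_general:.3%} per year")
    print(f"With 7x multiplier : {risk_claimed:.3%} per year")
    print(f"Absolute difference: {risk_claimed - risk_general:.3%}")

Both figures round to "very unlikely"; the absolute gap is a fraction of a percentage point, which is why the relative multiplier cannot carry the weight the column puts on it.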