Long Game: Group items matching "data" in title, tags, annotations or url

anonymous

Eight Silly Data Things Marketing People Believe That Get Them Fired. - 1 views

  • It turns out that Marketers, especially Digital Marketers, make really silly mistakes when it comes to data. Big data. Small data. Any data.
  • two common themes
  • 1. Some absolutely did not use data to do their digital jobs.
  • ...74 more annotations...
  • 2. Many used some data, but they unfortunately used silly data strategies/metrics.
  • Silly not in their eyes, silly in my eyes.
  • A silly metric, I better define it :), is one that distracts you from focusing on business investments that lead to bottom-line impact.
    • anonymous
       
      Within the context of my current project, the bottom-line impact would be increased engagement (in the form of donations, clinical study participation, and blood/fluid donation to scientific research).
  • Eight data things that marketing people believe that get them fired…. 1. Real-time data is life changing. 2. All you need to do is fix the bounce rate. 3. Number of Likes represents social awesomeness. 4. # 1 Search Results Ranking = SEO Success. 5. REDUCE MY CPC! REDUCE MY CPC NOW!! 6. Page views. Give me more page views, more and more and more! 7. Impressions. Go, get me some impressions stat! 8. Demographics and psychographics. That is all I need! Don't care for intent!
  • 1. Real-time data is life changing.
  • A lot of people get fired for this. Sadly not right away, because it takes time to realize how spectacular of a waste of money getting to real-time data was.
    • anonymous
       
      This is some REALLY FUNNY SHIT to me. But I'm a nerd.
  • I want you to say: "I don't want real-time data, I want right-time data. Let's understand the speed of decision making in our company. If we make real-time decisions, let's get real time data. If we make decisions over two days, let's go with that data cycle. If it takes ten days to make a decision to change bids on our PPC campaigns, let's go with that data cycle." Right-time.
  • Real-time data is very expensive.
  • It is also very expensive from a decision-making perspective
  • even in the best case scenario of the proverbial pigs flying, they'll obsess about tactical things.
    • anonymous
       
      I get this completely. We get hung up on the tactical and lose sight of the strategic.
  • So shoot for right-time data.
  • That is a cheaper systems/platform/data strategy.
  • (And remember even the most idiotic system in the world now gives you data that is a couple hours old with zero extra investment from you. So when you say real time you are really saying "Nope, two hours is not enough for me!").
    • anonymous
       
      THIS is probably the best argument for our using Google Analytics and Google Search to collect data instead of paying large costs to firms that will offer questionable results.
  • That is also a way to get people to sync the data analysis (not data puking, sorry I meant data reporting) with the speed at which the company actually makes decisions (data > analyst > manager > director > VP > question back to manager > yells at the analyst > back to director> VP = 6 days).
  • The phrase "real-time data analysis" is an oxymoron.
  • 2. All you need to do is fix the bounce rate.
  • The difference between a KPI and a metric is that the former has a direct line of sight to your bottom-line, while the latter is helpful in diagnosing tactical challenges.
  • Bounce rate is really useful for finding things you suck at.
  • Along the way you also learn how not to stink. Bounce rate goes from 70% to a manageable 30%. Takes three months.
  • Stop obsessing about bounce rate.
  • From the time people land on your site it might take another 12 – 25 pages for them to buy or submit a lead. Focus on all that stuff. The tough stuff. Then you'll make money.
  • Focus on the actual game. Focus on incredible behavior metrics like Pages/Visit, focus on the Visitor Flow report, obsess about Checkout Abandonment Rate, make love to Average Order Size.
  • 3. Number of Likes represents social awesomeness.
  • it does not take a very long time for your Senior Management to figure out how lame the Likes metric is and that it drives 1. Zero value on Facebook and 2. Zero squared economic value or cost savings to the business.
  • many spectacular reasons
  • Here's one… We are looking at two consumer product brands, the tiny company Innocent Drinks and the Goliath called Tide Detergent.
  • Even with 10x the number of Likes on Facebook the giant called Tide has 4x fewer people talking about their brand when compared to the David called Innocent.
  • As no less than three comments mention below, Innocent is 90% owned by Coca Cola. Fooled me!
  • In a massively large company they've carved out an identity uniquely their own. They refuse to be corrupted by Coca Cola's own Facebook strategy of constant self-pimping and product ads masquerading as "updates." As a result, pound for pound, Innocent's fan engagement on its page is multiple times better than Coca Cola's - even if the latter has many more likes.
  • 4. # 1 Search Results Ranking = SEO Success.
  • Not going to happen.
  • as all decent SEOs will tell you, search results are no longer standardized. Rather, they are personalized. I might even say, hyper-personalized. Regardless of whether you are logged in or not.
  • When I search for "avinash" on Google I might rank #1 in the search results because I'm logged into my Google account, the engine has my search history, my computer IP address, it also has searches by others in my vicinity, local stories right now, and so many other signals. But when you search for "avinash" your first search result might be a unicorn. Because the search engine has determined that the perfect search result for you for the keyword avinash is a unicorn.
    • anonymous
       
      This is crucial to understand. I will be sharing this, at length, with my boss. :)
  • Universal search for example means that personalized results will not only look for information from web pages, they also look for YouTube/Vimeo videos, social listings, images of course, and so on and so forth.
  • Then let's not forget that proportionally there are very few head searches; your long-tail searches will be huge.
  • Oh and remember that no one types a word or two, people use long phrases.
  • There are a ton more reasons obsessing about the rank of a handful of words on the search engine results page (SERP) is a very poor decision.
  • So check your keyword ranking if it pleases you.
  • But don't make it your KPI.
  • For purely SEO, you can use Crawl Rate/Depth, Inbound Links (just good ones) and growth (or lack thereof) in your target key phrases as decent starting points.
  • You can graduate to looking at search traffic by site content or types of content you have (it's a great signal your SEO is working).
  • Measuring Visits and Conversions in aggregate first and segmented by keywords (or even keyword clusters) will get you on the path to showing real impact.
  • That gives you short term acquisition quality, you can then move to long term quality by focusing on metrics like lifetime value.
  • 5. REDUCE MY CPC! REDUCE MY CPC NOW!!
  • You should judge the success of that showing up by measuring if you made money! Did you earn any profit?
  • Friends don't let friends use CPC as a KPI. Unless said friends want the friend fired.
  • 6. Page views. Give me more page views, more and more and more!
  • Content consumption is a horrible metric. It incentivizes suboptimal behavior in your employees/agencies.
  • If you are a news site, you can get millions of page views
  • And it will probably get you transient traffic.
  • And what about business impact from all these one-night stands?
  • If you are in the content only business (say my beloved New York Times) a better metric to focus on is Visitor Loyalty
  • If you are in the lead generation business and do the "OMG let's publish an infographic on dancing monkey tricks which will get us a billion page views, even though we have nothing to do with dancing or monkeys or tricks" thing, measure success on the number of leads received and not how "viral" the infographic went and how many reshares it got on Twitter.
    • anonymous
       
      In other words, use that odd-one-off to redirect attention to the source of that one-off. I'll have to ponder that given our different KPI needs (nonprofit, we don't sell anything).
  • Don't obsess about page views.
  • Then measure the metric closest to that. Hopefully some ideas above will help get you promoted.
  • 7. Impressions. Go, get me some impressions stat!
  • My hypothesis is that TV/Radio/Magazines have created this bad habit. We can measure so little, almost next to nothing, that we've brought our immensely shaky GRP metric from TV to digital. Here it's called impressions. Don't buy impressions.
  • Buy engagement. Define what it means first, of course.
  • If you are willing to go to clicks, do one better and measure Visits. At least they showed up on your mobile/desktop site.
  • Now if you are a newbie, measure bounce rate. If you have a tiny amount of experience measure Visit Duration. If you are a pro, measure Revenue. If you are an Analysis Ninja, measure Profit.
  • Impressions suck. Profit rocks.
  • If the simple A/B (test/control) experiment demonstrates that delivering display banner ad impressions to the test group delivers increased revenue, buy impressions to your heart's content. I'll only recommend that you repeat the experiment once a quarter.
  • You can buy impressions if you can prove via a simple controlled experiment that when we show impressions we get more engagement/sales, and when we don't show impressions we don't. (A toy version of such a test/control comparison is sketched after this entry.)
  • But if you won't do the experiment and you use the # of impressions as a measure of success
  • 8. Demographics and psychographics. That is all I need! Don't care for intent!
  • This is not a metric, this is more of a what data you'll use to target your advertising issue.
  • Our primary method of buying advertising and marketing is: "I would like to reach 90 year old grandmas that love knitting, what TV channel should I advertise on?" Or they might say: "I would like to reach 18 to 24 year olds with college education who supported Barack Obama for president." Both are examples of demographic and psychographic segments.
  • Based on that very thin-ice data, we bought advertising. That was our lot in life.
  • Did you know 50% of TV viewership is on networks that each have <1% share? Per industry.bnet.com. I dare you to imagine how difficult it is to measure who they are, and how to target them to pimp your shampoo, car, cement.
  • Intent beats demographics and psychographics. Always.
  • if you have advertising money to spend, first spend it all on advertising that provides you intent data.
  • Search has a ton of strong intent. It does not matter if you are a grandma or an 18 year old. If you are on Baidu and you search for the HTC One, you are expressing strong intent. Second, content consumption has intent built in. If I'm reading lots of articles about how to get pregnant, you could show me an ad related to that
  • The first intent is strong, the second one is weaker.
  • There is a lot of intent data on the web. That is our key strength.
  •  
    This is a really great read by Avinash Kaushik at Occam's Razor. Voluminous highlights follow.
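An editor's aside: the test/control experiment Kaushik describes takes only a few lines to evaluate. Below is a minimal sketch, assuming a simple two-group design; the function name and all conversion numbers are hypothetical.

```python
import math

def two_proportion_z(conv_test, n_test, conv_ctrl, n_ctrl):
    """Z-score for the difference between two conversion rates."""
    p_test, p_ctrl = conv_test / n_test, conv_ctrl / n_ctrl
    p_pool = (conv_test + conv_ctrl) / (n_test + n_ctrl)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_ctrl))
    return (p_test - p_ctrl) / se

# Test group saw the display-ad impressions; the control group did not.
z = two_proportion_z(conv_test=460, n_test=50_000,
                     conv_ctrl=380, n_ctrl=50_000)
lift = 460 / 50_000 - 380 / 50_000
print(f"lift: {lift:.2%}, z = {z:.2f}")  # |z| > 1.96 -> significant at ~95%
```

The conclusion mirrors Kaushik's: buy impressions only if the test group reliably out-converts the control, and repeat the experiment once a quarter.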
anonymous

Highest-Calorie Menu Item at McDonald's? Not a Burger - 0 views

  • Some chains, such as Panera Bread Co. and Au Bon Pain, already post calories on their menus, but McDonald's is the largest chain and the first fast-food company to do so on a national level.
  • Americans now consume roughly a third of their calories from restaurants, up from less than a quarter in the 1970s, according to the U.S. Department of Agriculture. And people spend about half of their food budgets at restaurants now, compared to a third in the 1970s.
  • "If we see a similar effect from other chains you'd see about a 30-calorie per person per day decrease," said Margo Wootan, director of nutrition policy at the Center for Science in the Public Interest. "The thing about obesity is it's caused by a slow, steady creep in people's weight over decades. For most of us, we're gaining one to two pounds per year steadily over decades and end up being 30 to 50 pounds overweight. The obesity epidemic is explained by about 100 extra calories per person per day, so if we get a daily 30-calorie decrease from menu labeling, that's huge."
  • ...4 more annotations...
  • Shortly after Panera Bread posted calorie counts on its menu boards in April 2010, the company noticed that 20% of customers began ordering lower-calorie items.
  • A report published last year in the International Journal of Behavioral Nutrition and Physical Activity, which reviewed seven studies on the topic, found that "calorie labeling does not have the intended effect of decreasing calorie purchasing or consumption."
  • New regulations requiring operators of restaurants with 20 or more outlets to post calories on menus are expected to take effect by the end of next year.
  • Glenn Kikuchi, owner of 10 McDonald's franchises in Maryland, said he's already seen signs that the highlighted calorie counts are having an effect. "I see that a lot of the moms are looking at it, but also, curiously enough, the teenagers are looking at it, too," Mr. Kikuchi said.
  •  
    "McDonald's Corp. MCD +0.32% customers will have an easier time of it next week, when the burger giant's restaurant and drive-thru menu boards across the country will show that the Big Mac, at 550 calories, is 200 calories leaner than the other burger. But other choices won't be so clear-cut, like the Double Cheeseburger with 440 calories or the Southwest Salad with Crispy Chicken, which weighs in at 450. McDonald's highest-calorie item isn't a burger at all, but the 1,150-calorie Big breakfast with hotcakes and large biscuit. And the healthy-sounding 22-ounce mango pineapple smoothie matches the 350 calories in the grilled chicken sandwich."
anonymous

How people read online: Why you won't finish this article. - 1 views

  • For every 161 people who landed on this page, about 61 of you—38 percent—are already gone.
  • We’re at the point in the page where you have to scroll to see more. Of the 100 of you who didn’t bounce, five are never going to scroll.
  • You’re tweeting a link to this article already? You haven’t even read it yet! What if I go on to advocate something truly awful, like a constitutional amendment requiring that we all type two spaces after a period?
  • ...23 more annotations...
  • Only a small number of you are reading all the way through articles on the Web.
  • Schwartz’s data shows that readers can’t stay focused. The more I type, the more of you tune out. And it’s not just me. It’s not just Slate. It’s everywhere online. When people land on a story, they very rarely make it all the way down the page. A lot of people don’t even make it halfway.
  • Even more dispiriting is the relationship between scrolling and sharing. Schwartz’s data suggest that lots of people are tweeting out links to articles they haven’t fully read. If you see someone recommending a story online, you shouldn’t assume that he has read the thing he’s sharing.
  • OK, we’re a few hundred words into the story now. According to the data, for every 100 readers who didn’t bounce up at the top, there are about 50 who’ve stuck around. Only one-half!
  • Take a look at the following graph created by Schwartz, a histogram showing where people stopped scrolling in Slate articles.
  • A typical Web article is about 2000 pixels long.
  • There’s a spike at 0 percent—i.e., the very top pixel on the page—because 5 percent of readers never scrolled deeper than that spot.
  • Finally, the spike near the end is an anomaly caused by pages containing photos and videos—on those pages, people scroll through the whole page.
  • Or look at John Dickerson’s fantastic article about the IRS scandal or something. If you only scrolled halfway through that amazing piece, you would have read just the first four paragraphs. Now, trust me when I say that beyond those four paragraphs, John made some really good points about whatever it is his article is about, some strong points that—without spoiling it for you—you really have to read to believe. But of course you didn’t read it because you got that IM and then you had to look at a video and then the phone rang …
  • do you know what you get on a typical Slate page if you never scroll? Bupkis.
  • Schwarz’s histogram for articles across lots of sites is in some ways more encouraging than the Slate data, but in other ways even sadder:
  • On these sites, the median scroll depth is slightly greater—most people get to 60 percent of the article rather than the 50 percent they reach on Slate pages. On the other hand, on these pages a higher share of people—10 percent—never scroll. In general, though, the story across the Web is similar to the story at Slate: Few people are making it to the end, and a surprisingly large number aren’t giving articles any chance at all.
  • Chartbeat can’t directly track when individual readers tweet out links, so it can’t definitively say that people are sharing stories before they’ve read the whole thing. But Chartbeat can look at the overall tweets to an article, and then compare that number to how many people scrolled through the article.
  • Here’s Schwartz’s analysis of the relationship between scrolling and sharing on Slate pages:
  • And here's a similar look at the relationship between scrolling and sharing across sites monitored by Chartbeat (both charts courtesy of Chartbeat):
  • There’s a very weak relationship between scroll depth and sharing. Both at Slate and across the Web, articles that get a lot of tweets don’t necessarily get read very deeply.
  • Articles that get read deeply aren’t necessarily generating a lot of tweets.  
  • Schwartz tells me that on a typical Slate page, only 25 percent of readers make it past the 1,600th pixel of the page, and we’re way beyond that now.
  • Sure, like every other writer on the Web, I want my articles to be widely read, which means I want you to Like and Tweet and email this piece to everyone you know. But if you had any inkling of doing that, you’d have done it already. You’d probably have done it just after reading the headline and seeing the picture at the top. Nothing I say at this point matters at all.
  • So, what the hey, here are a couple more graphs, after which I promise I’ll wrap things up for the handful of folks who are still left around here. (What losers you are! Don’t you have anything else to do?) This heatmap shows where readers spend most of their time on Slate pages:
  • Schwartz told me I should be very pleased with Slate’s map, which shows that a lot of people are moved to spend a significant amount of their time below the initial scroll window of an article page.
  • Since you usually have to scroll below the fold to see just about any part of an article, Slate’s below-the-fold engagement looks really great. But if articles started higher up on the page, it might not look as good. In other words: Ugh.
  • Maybe this is just our cultural lot: We live in the age of skimming. I want to finish the whole thing, I really do. I wish you would, too. Really—stop quitting! But who am I kidding. I’m busy. You’re busy. There’s always something else to read, watch, play, or eat. OK, this is where I’d come up with some clever ending. But who cares? You certainly don’t. Let’s just go with this: Kicker TK.
  •  
    "Schwartz's data shows that readers can't stay focused. The more I type, the more of you tune out. And it's not just me. It's not just Slate. It's everywhere online. When people land on a story, they very rarely make it all the way down the page. A lot of people don't even make it halfway. Even more dispiriting is the relationship between scrolling and sharing. Schwartz's data suggest that lots of people are tweeting out links to articles they haven't fully read. If you see someone recommending a story online, you shouldn't assume that he has read the thing he's sharing."
anonymous

Five tools to extract "locked" data in PDFs - 0 views

  • Remember, no converter is perfect. This is because PDFs can hold scanned information (which requires another kind of conversion, like OCR), complex tables (with columns or rows spanning multiple cells), or tables without graphic lines; in short, distinct patterns that hinder the correct formatting of the converted file.
  • According to the journalist, the best way to do this is to randomly check the converted data to see if it's different from the original. And don't be fooled, there will almost always be a need to clean up the data when using an automatic conversion, especially for tables.  
  • 1. Cometdocs
  • ...4 more annotations...
  • 2. Zamzar
  • 3. Nitro PDF to Excel
  • 4. PDFtoText
  • 5. Tabula (the last two tools are sketched in code after this entry)
  •  
    "Extracting data from PDFs for open use is not a simple task, as ProPublica reporter Jeremy B. Merrill, one of the contributors to the "Dollars for Docs" project, can attest. The Knight Center for Journalism in the Americas asked programmers and specialists in data journalism, including the ex-editor of the Guardian datablog, Simon Rogers, for their recommendations and identified some free tools to facilitate the conversation from PDFs to an open format, like CSV tables."
anonymous

For the Love of Money - 0 views

  • I’d learned about the importance of being rich from my dad. He was a modern-day Willy Loman, a salesman with huge dreams that never seemed to materialize. “Imagine what life will be like,” he’d say, “when I make a million dollars.” While he dreamed of selling a screenplay, in reality he sold kitchen cabinets. And not that well. We sometimes lived paycheck to paycheck off my mom’s nurse-practitioner salary.
  • In desperation, I called a counselor whom I had reluctantly seen a few times before and asked for help. She helped me see that I was using alcohol and drugs to blunt the powerlessness I felt as a kid and suggested I give them up. That began some of the hardest months of my life. Without the alcohol and drugs in my system, I felt like my chest had been cracked open, exposing my heart to air. The counselor said that my abuse of drugs and alcohol was a symptom of an underlying problem — a "spiritual malady," she called it.
  • For the first time in my life, I didn’t have to check my balance before I withdrew money. But a week later, a trader who was only four years my senior got hired away by C.S.F.B. for $900,000. After my initial envious shock — his haul was 22 times the size of my bonus — I grew excited at how much money was available.
  • ...19 more annotations...
  • At 25, I could go to any restaurant in Manhattan — Per Se, Le Bernardin — just by picking up the phone and calling one of my brokers, who ingratiate themselves to traders by entertaining with unlimited expense accounts. I could be second row at the Knicks-Lakers game just by hinting to a broker I might be interested in going. The satisfaction wasn’t just about the money. It was about the power. Because of how smart and successful I was, it was someone else’s job to make me happy.
  • My counselor didn’t share my elation. She said I might be using money the same way I’d used drugs and alcohol — to make myself feel powerful — and that maybe it would benefit me to stop focusing on accumulating more and instead focus on healing my inner wound. “Inner wound”? I thought that was going a little far and went to work for a hedge fund.
  • I wanted a billion dollars. It’s staggering to think that in the course of five years, I’d gone from being thrilled at my first bonus — $40,000 — to being disappointed when, my second year at the hedge fund, I was paid “only” $1.5 million.
  • But in the end, it was actually my absurdly wealthy bosses who helped me see the limitations of unlimited wealth.
  • I was in a meeting with one of them, and a few other traders, and they were talking about the new hedge-fund regulations. Most everyone on Wall Street thought they were a bad idea. "But isn't it better for the system as a whole?" I asked. The room went quiet, and my boss shot me a withering look. I remember his saying, "I don't have the brain capacity to think about the system as a whole. All I'm concerned with is how this affects our company." I felt as if I'd been punched in the gut. He was afraid of losing money, despite all that he had.
  • From that moment on, I started to see Wall Street with new eyes. I noticed the vitriol that traders directed at the government for limiting bonuses after the crash. I heard the fury in their voices at the mention of higher taxes. These traders despised anything or anyone that threatened their bonuses. Ever see what a drug addict is like when he’s used up his junk? He’ll do anything — walk 20 miles in the snow, rob a grandma — to get a fix. Wall Street was like that. In the months before bonuses were handed out, the trading floor started to feel like a neighborhood in “The Wire” when the heroin runs out.
  • I’d always looked enviously at the people who earned more than I did; now, for the first time, I was embarrassed for them, and for me. I made in a single year more than my mom made her whole life. I knew that wasn’t fair; that wasn’t right. Yes, I was sharp, good with numbers. I had marketable talents. But in the end I didn’t really do anything. I was a derivatives trader, and it occurred to me the world would hardly change at all if credit derivatives ceased to exist. Not so nurse practitioners. What had seemed normal now seemed deeply distorted.
  • Wealth addiction was described by the late sociologist and playwright Philip Slater in a 1980 book, but addiction researchers have paid the concept little attention. Like alcoholics driving drunk, wealth addiction imperils everyone.
  • Wealth addicts are, more than anybody, specifically responsible for the ever widening rift that is tearing apart our once great country. Wealth addicts are responsible for the vast and toxic disparity between the rich and the poor and the annihilation of the middle class. Only a wealth addict would feel justified in receiving $14 million in compensation — including an $8.5 million bonus — as the McDonald’s C.E.O., Don Thompson, did in 2012, while his company then published a brochure for its work force on how to survive on their low wages. Only a wealth addict would earn hundreds of millions as a hedge-fund manager, and then lobby to maintain a tax loophole that gave him a lower tax rate than his secretary.
  • DESPITE my realizations, it was incredibly difficult to leave. I was terrified of running out of money and of forgoing future bonuses.
  • The first year was really hard. I went through what I can only describe as withdrawal — waking up at nights panicked about running out of money, scouring the headlines to see which of my old co-workers had gotten promoted.
  • Over time it got easier — I started to realize that I had enough money, and if I needed to make more, I could. But my wealth addiction still hasn’t gone completely away. Sometimes I still buy lottery tickets.
  • In the three years since I left, I’ve married, spoken in jails and juvenile detention centers about getting sober, taught a writing class to girls in the foster system, and started a nonprofit called Groceryships to help poor families struggling with obesity and food addiction.
  • And as time passes, the distortion lessens. I see Wall Street’s mantra — “We’re smarter and work harder than everyone else, so we deserve all this money” — for what it is: the rationalization of addicts. From a distance I can see what I couldn’t see then — that Wall Street is a toxic culture that encourages the grandiosity of people who are desperately trying to feel powerful.
  • I was lucky. My experience with drugs and alcohol allowed me to recognize my pursuit of wealth as an addiction. The years of work I did with my counselor helped me heal the parts of myself that felt damaged and inadequate, so that I had enough of a core sense of self to walk away.
  • Dozens of different types of 12-step support groups — including Clutterers Anonymous and On-Line Gamers Anonymous — exist to help addicts of various types, yet there is no Wealth Addicts Anonymous. Why not? Because our culture supports and even lauds the addiction.
  • Look at the magazine covers in any newsstand, plastered with the faces of celebrities and C.E.O.'s; the superrich are our cultural gods. I hope we all confront our part in enabling wealth addicts to exert so much influence over our country.
  • I recently got an email from a hedge-fund trader who said that though he was making millions every year, he felt trapped and empty, but couldn’t summon the courage to leave. I believe there are others out there.
  • Maybe we can form a group and confront our addiction together. And if you identify with what I’ve written, but are reticent to leave, then take a small step in the right direction. Let’s create a fund, where everyone agrees to put, say, 25 percent of their annual bonuses into it, and we’ll use that to help some of the people who actually need the money that we’ve been so rabidly chasing. Together, maybe we can make a real contribution to the world.
  •  
    "IN my last year on Wall Street my bonus was $3.6 million - and I was angry because it wasn't big enough. I was 30 years old, had no children to raise, no debts to pay, no philanthropic goal in mind. I wanted more money for exactly the same reason an alcoholic needs another drink: I was addicted."
anonymous

Geopolitical Intelligence, Political Journalism and 'Wants' vs. 'Needs' - 2 views

  • At Stratfor, the case is frequently the opposite: Our readers typically are expert in the topics we study and write about, and our task is to provide the already well-informed with further insights. But the question is larger than that.
  • We co-exist in this ecosystem, but geopolitical intelligence is scarcely part of the journalistic flora and fauna. Our uniqueness creates unique challenges
  • Instead, let's go to the core dynamic of the media in our age and work back outward through the various layers to what we do in the same virtual space, namely, intelligence.
  • ...17 more annotations...
  • You could get the same information with a week's sorting of SEC filings. But instead, you have just circumvented that laborious process by going straight to just one of the "meta-narratives" that form the superstructure of journalism.
  • Meta-Narratives at Journalism's Core. Welcome to the news media's inner core.
  • For the fundamental truth of news reporting is that it is constructed atop pre-existing narratives comprising a subject the reader already knows or expects, a description using familiar symbolism often of a moral nature, and a narrative that builds through implicit metaphor from the stories already embedded in our culture and collective consciousness.
  • The currency of language really is the collection of what might be called the "meta-stories."
  • There's nothing wrong with this. For the art of storytelling -- journalism, that is -- is essentially unchanged from the tale-telling of Neolithic shamans millennia ago up through and including today's New York Times. Cultural anthropologists will explain that our brains are wired for this. So be it.
  • We at Stratfor may not "sync up." Journalists certainly do.
  • Meta-Narratives Meet Meta-Data. There is nothing new in this; it is a process almost as old as the printing press itself. But where it gets particularly new and interesting is with my penultimate layer of difference, the place where meta-narratives meet meta-data.
  • "Meta-data," as the technologists call it, is more simply understood as "data about data."
  • Where the online battle for eyeballs becomes truly epic, however (Google "the definition of epic" for yet another storyteller's meta-story), is when these series of tags are organized into a form of meta-data called a "taxonomy."
  • And thus we arrive at the outermost layer of the media's skin in our emerging and interconnected age. This invisible skin over it all comes in the form of a new term of art, "search engine optimization," or in the trade just "SEO."
  • With journalists already predisposed by centuries of convention to converge on stories knitted from a common canon, the marriage of meta-narrative and meta-data simply accelerates to the speed of light the calibration of topic and theme.
  • If a bit simplified, these layers add up to become the connective tissue in a media-centric and media-driven age. Which leads me back to the original question of why Stratfor so often "fails to sync up with the media."
  • For by the doctrines of the Internet's new commercial religion, a move disrupting the click stream was -- and is -- pure heresy. But our readers still need to know about Colombia, just as they need our unique perspectives on Syria.
  • Every forecast and article we do is essentially a lab experiment, in which we put the claims of politicians, the reports on unemployment statistics, the significance of a raid or a bombing to the test of geopolitics.
  • We spend much more time studying the constraints on political actors -- what they simply cannot do economically, militarily or geographically -- than we do examining what they claim they will do.
  • The key characteristic to ponder here is that such methodology -- intelligence, in this case -- seeks to enable the acquisition of knowledge by allowing reality to speak for itself. Journalism, however, creates a reality atop many random assumptions through the means described. It is not a plot, a liberal conspiracy or a secret conservative agenda at work, as so many media critics will charge. It is simply the way the media ecosystem functions. 
  • Journalism, in our age more than ever before, tells you what you want to know. Stratfor tells you what you need to know. 
  •  
    "Just last week, the question came again. It is a common one, sometimes from a former colleague in newspaperdom, sometimes from a current colleague here at Stratfor and often from a reader. It is always to the effect of, "Why is Stratfor so often out of sync with the news media?" All of us at Stratfor encounter questions regarding the difference between geopolitical intelligence and political journalism. One useful reply to ponder is that in conventional journalism, the person providing information is presumed to know more about the subject matter than the reader. At Stratfor, the case is frequently the opposite: Our readers typically are expert in the topics we study and write about, and our task is to provide the already well-informed with further insights. But the question is larger than that."
  •  
    Excuse me while I guffaw. Stratfor is not the first to claim that they're the only ones not swayed by financial factors. Stratfor has its own metanarratives (especially geographic determinism) as much as anyone else does.
anonymous

America's Epidemic of Enlightened Racism - 0 views

  • the summary dismissal of the column – without substantive rebuttals to claims that are so racist as to seem to be beneath public discourse – means that he can play the role of victim of political correctness gone amok.
  • Derbyshire claims that his ideas are backed up by “methodological inquiries in the human sciences,” and includes links to sites that provide all the negative sociological data about black people you’d ever need to justify your fear of them, including the claim that “blacks are seven times more likely than people of other races to commit murder, and eight times more likely to commit robbery.”
  • So he can cast himself as someone who had the courage to tell it like it is – with all the sociological data backing him up – only to be punished for this by the reactionary hypocrites who control the public discourse.
  • ...25 more annotations...
  • Once again, he can tell himself, those quick to cry “racism” have prevented an honest conversation about race.
  • If Derbyshire were a lone crank, none of it would matter much. But he’s not.
  • they see themselves as advocates of a sort of enlightened racism that doesn't shrink from calling a spade a spade but isn't inherently unjust.
  • Enlightened racism is meant to escape accusations of being racist in the pejorative sense via two avenues: the first is the appeal to data I have just described. The second is a loophole to the effect that exceptions are to be made for individuals.
  • They could care less about skin color, they say; it really is the content of people’s characters that concerns them, and that content really does suffer more in blacks than whites.
  • Because they are so widespread and aim to restore the respectability of interracial contempt, these attempts at an enlightened racism deserve a rebuttal. Especially in light of the fact that those who hold such views often see themselves as the champions of reason over sentiment, when in fact their views are deeply irrational.
  • First, a history of slavery, segregation, and (yes) racism, means that African American communities suffer from some social problems at higher rates than whites.
  • But that doesn’t change the fact that the majority of black people – statistically, and not just based on politically correct fuzzy thinking – are employed, not on welfare, have no criminal record, and so on and so forth.
  • So the kind of thinking that enlightened racists see as their way of staring a hard reality right in the face turns out to be just a silly rationalization using weak statistical differences.
  • In other words, one’s chances of being a victim of violent crime is already so low, that even accounting  for higher crime rates among African Americans, one’s chance of being a victim of violent crime by an African American remains very low.
  • The argument that Derbyshire and those like him make is that we are justified in treating an entire population as a threat – in essentially shunning them in the most degrading way – because one’s chances of being harmed by any given member of that population, while very low, is not quite as low as one’s chances of being harmed by the general population.
  • It’s an argument that starts out with sociological data and quickly collapses to reveal the obvious underlying motivation: unenlightened racism of the coarsest variety.
  • Second, there is the issue of character: because this, after all, is what really motivates these attempts at establishing an enlightened racism that gives individuals the benefit of the doubt while acknowledging the truth about general cultural differences.
  • I think it suffices to respond in the following way: people tend to mistake their discomfort with the cultural differences of a group with that group’s inferiority. (They also tend to conflate their political and economic advantages with psychological superiority).
  • If they respond with sociological data about education and birth rates and all the rest, we only have to respond that like crime rates, they’re exactly the sort of consequences one would expect from a history of oppression and even then fail to justify racist stereotypes.
  • The fact is, that where we pick a white person or black person at random, the same truths hold: they very likely have a high school diploma, and probably do not have a bachelor’s degree. They’re probably employed and not on welfare. They’ve probably never been to prison, and they almost certainly are not going to harm you. These are the broad statistical truths that simply do not vary enough between races to justify the usual stereotypes.
  • So here is the hard truth that advocates of enlightened racism need to face: their sociological data and ideas about black character, intelligence and morality are post-hoc rationalizations of their discomfort with average cultural differences between whites and blacks.
  • The fact that they have black friends and political heroes, or give individuals the benefit of the doubt as long as they are “well-socialized” and “intelligent” just means that they can suppress that discomfort if the cultural differences are themselves lessened to a tolerable degree.
  • And so they need to disabuse themselves of the idea that true, unenlightened racism is a term very narrowly defined: that it requires a personal hatred of individual black people based on their skin color despite evidence of redeeming personal qualities.
  • What they think of as redeeming personal qualities are just qualities that tend to make them less uncomfortable. But the hatred of black culture and post-hoc rationalizations of this hatred using sociological data are just what racism is.
  • This is not to say that mere discomfort with cultural difference is the same thing as racism (or xenophobia). Such discomfort is unavoidable: You’d have this sort of discomfort if you tried live in a foreign country for a while, and you’d be tempted by the same sorts of ideas about how stupid and mean people are for not doing things the way you’re used to.
  • strange customs become “stupid” because they reflect less of ourselves back to us than we’re used to.
  • That lack of reflection is felt not only as a distressing deprivation of social oxygen, but as an affront, a positive discourtesy.
  • The mature way to deal with such discomfort is to treat it as of a kind with social anxiety in general: people are strange, when you’re a stranger. Give it some time, and that changes. But it won’t change if you develop hefty rationalizations about the inferiority and dangerousness of others and treat these rationalizations as good reasons for cultural paranoia.
  • Americans seem to have difficulty engaging in the required reflective empathy, and imagining how they would feel if they knew that every time they walked into a public space a large number of a dominant racial majority looked at them with fear and loathing. They might, under such circumstances, have a bad day.
  •  
    From Nick Lalone in Buzz. "John Derbyshire has been fired from the National Review for an openly racist column on how white people should advise their children with respect to "blacks": for the most part, avoid them. Because on the whole, they are unintelligent, antisocial, hostile, and dangerous. Or as he puts it, avoid "concentrations of blacks" or places "swamped with blacks," and leave a place when "the number of blacks suddenly swells," and keep moving when "accosted by a strange black" in the street. The language is alarmingly dehumanizing: black people come in "swamps" and "concentrations" (and presumably also in hordes, swarms, and just plain gangs). And it's clearly meant to be a dismissal of the notion - much talked about recently in light of the Trayvon Martin shooting - that African Americans should be able to walk down the street without being shunned, much less attacked."
anonymous

How Bayes' Rule Can Make You A Better Thinker - 1 views

  • To find out more about this topic, we spoke to mathematician Spencer Greenberg, co-founder of Rebellion Research and a contributing member of AskAMathematician where he answers questions on math and physics. He has also created a free Bayesian thinking module that's available online.
  • Bayes’s Rule is a theorem in probability theory that answers the question, "When you encounter new information, how much should it change your confidence in a belief?" It’s essentially about making decisions under uncertainty, and how we should update or revise our theories as new evidence emerges. It can also be used to help us reach decisions in those circumstances when very few observations or pieces of evidence are available. And it can also be used to help us avoid common mistakes and fallacies in our thinking.
  • The key to Bayesianism is in understanding the power of probabilistic reasoning. But unlike games of chance, in which there’s no ambiguity and everyone agrees on what’s going on (like the roll of die), Bayesians use probability to express their degree of belief about something.
  • ...11 more annotations...
  • When it comes to the confidence we have in our beliefs — what can be expressed in terms of probability — we can't just make up any number we want. There's only one consistent way to handle those degrees of belief.
  • In the strictest sense, of course, this requires a bit of mathematical knowledge. But Greenberg says there’s still an easy way to use this principle in daily life — and one that can be converted to plain English.
  • Greenberg says it’s the question of evidence which he should apply, which goes like this:: Assuming that our hypothesis is true, how much more plausible, or likely, is the evidence compared to the hypothesis if it was not true?
  • “It’s important to note that the idea here is not to answer the question in a precise way — like saying that it’s 3.2 times more likely — rather, it’s to get a rough sense. Is it a high number, a modest number, or a small number?”
  • To make Bayes practical, we have to start with a belief about how likely something is. Then we need to ask the question of evidence, and whether or not we should increase the confidence in our beliefs by a lot, a little, and so on.
  • “Much of the time people will automatically try to shoot down evidence, but you can get evidence for things that are not true. Just because you have evidence doesn’t mean you should change your mind. But it does mean that you should change your degree of belief.”
  • Greenberg also describes the representativeness heuristic, in which people tend to look at how similar things are.
  • Greenberg also says that we should shy away from phrases like, “I believe,” or “I don’t believe.” “That’s the wrong way to frame it,” he says. “We should think about things in terms of how probable they are. You almost never have anything close to perfect certainty.”
  • “Let’s say you believe that your nutrition supplement works,” he told us, “Then you get a small amount of evidence against it working, and you completely write that evidence off because you say, ‘well, I still believe it works because it’s just a small amount of evidence.’ But then you get more evidence that it doesn’t work. If you were an ideal reasoner, you’d see that accumulation of evidence, and every time you get that evidence, you should believe less and less that the nutritional supplements are actually working.” Eventually, says Greenberg, you end up tipping things so that you no longer believe. But instead, we end up never changing our mind.
  • “You should never say that you have absolute certainty, because it closes the door to being able to revise your certainty in light of new information,” Greenberg told io9. “And the same thing can be said for having zero percent certainty about something happening. If you’re at 100% certainty, then the correct way of updating is to stay at 100% forever, and no amount of evidence can tip you.”
  • Lastly, he also says that probabilities can depend on the observer, a kind of probability relativity. We all have access to different information, so different people should assign different probabilities to different things based on different sets of evidence.
  •  
    "Having a strong opinion about an issue can make it hard to take in new information about it, or to consider other options when they're presented. Thankfully, there's an old rule that can help us avoid this problem - and even help us make good decisions when we're uncertain. Here's how Bayesian Reasoning works, and why it can make you a better thinker."
anonymous

Social Is Not A Destination - 0 views

  • For Facebook, your social network sits on the Facebook site and most of the experience is consumed through the Facebook application; for Google+, social is about a type of glue that ties its services together across search, maps, photos, and more.
  • Google+ is now behind your email (it’s in Gmail), your chats (it powers Google hangouts), your calendar (in Google Calendar), your documents (it’s in Google Drive), your pictures (stealing a big functional element of Facebook by offering it in an integrated fashion with Android devices) and your videos (youTube channels are now managed via Google+); It’s there when you comment on a blogspot site or review a business or restaurant on Zagat and Google map.
  • Google+ serves as glue instead of destination, which means that any comparison between Google+ and Facebook is similar to comparing people who love New York with the Empire State Building: One is a group of people, who can do different things based on some invisible association (love of New York) while the other is a destination where those people or other people can gather for a brief period of time before they move on to some other place.
  • ...4 more annotations...
  • Google+’s approach is much more boring but also much more resistant to long-term changes because it focuses on links between people instead of being a destination.
  • links are also more resilient than destinations: once a series of links has been established, it is harder to undo than trying to switch from one destination to another.
  • While companies like Facebook, Twitter and Yahoo (through its more recent acquisitions, including Tumblr) have been busy building destination sites on which they can display advertising, Google has been using destinations as a driver for what advertising to display next.
  • This kind of inference based on previous patterns sits at the core of what Google+ is about and, interestingly, a Google alumnus has founded a company that would fit nicely in that vision: Foursquare, with its recent switch to search, seems to be the perfect database of location signals for Google to pick up.
  •  
    "Google+ serves as glue instead of destination, which means that any comparison between Google+ and Facebook is similar to comparing people who love New York with the Empire State Building: One is a group of people, who can do different things based on some invisible association (love of New York) while the other is a destination where those people or other people can gather for a brief period of time before they move on to some other place."
anonymous

Researchers Finally Replicated Reinhart-Rogoff, and There Are Serious Problems. - 0 views

  • Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact.
  • This has been one of the most cited stats in the public debate during the Great Recession.
  • In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results.
  • ...16 more annotations...
  • After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff, who were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.
  • They find that three main issues stand out.
  • First, Reinhart and Rogoff selectively exclude years of high debt and average growth.
  • Second, they use a debatable method to weight the countries.
  • Third, there also appears to be a coding error that excludes high-debt and average-growth countries.
  • All three bias in favor of their result, and without them you don't get their controversial result.
  • Selective Exclusions. Reinhart-Rogoff use 1946-2009 as their period, with the main difference among countries being their starting year.
  • The paper didn't disclose which years they excluded or why.
  • Unconventional Weighting. Reinhart-Rogoff divides country years into debt-to-GDP buckets. They then take the average real growth for each country within the buckets.
  • this weighting significantly reduces the average; if you weight by the number of years you find a higher growth rate above 90 percent (a toy comparison of the two weightings appears after this entry).
  • Coding Error. As Herndon-Ash-Pollin puts it: "A coding error in the RR working spreadsheet entirely excludes five countries, Australia, Austria, Belgium, Canada, and Denmark, from the analysis.
  • Being a bit of a doubting Thomas on this coding error, I wouldn't believe it unless I touched the digital Excel wound myself. One of the authors was able to show me that, and here it is. You can see the Excel blue-box for formulas missing some data:
  • If this error turns out to be an actual mistake Reinhart-Rogoff made, well, all I can hope is that future historians note that one of the core empirical points providing the intellectual foundation for the global move to austerity in the early 2010s was based on someone accidentally not updating a row formula in Excel.
  • So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim]." [UPDATE: To clarify, they find 2.2 percent if they include all the years, weigh by number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.
  • This is also good evidence for why you should release your data online, so it can be properly vetted.
  • But beyond that, looking through the data and how much it can collapse because of this or that assumption, it becomes quite clear that there's no magic number out there. The debt needs to be thought of as a response to the contingent circumstances we find ourselves in, with mass unemployment, a Federal Reserve desperately trying to gain traction at the zero lower bound, and a gap between what we could be producing and what we are. The past guides us, but so far it has failed to provide evidence of an emergency threshold. In fact, it tells us that a larger deficit right now would help us greatly.
  •  
    "In 2010, economists Carmen Reinhart and Kenneth Rogoff released a paper, "Growth in a Time of Debt." Their "main result is that...median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower." Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact."
anonymous

We Are All Hayekians Now: The Internet Generation and Knowledge Problems - 1 views

  • Primarily in his The Use of Knowledge in Society but also in his other contributions to the socialist calculation debate, Hayek crafted a brilliant statement of a perennial problem.
  • In the world of human endeavor, we have two types of problems: economic and technological.
  • Technological problems involve effectively allocating given resources to accomplish a single valuable goal.
  • ...21 more annotations...
  • The choice to build the bridge is a choice between this bridge or that skyscraper as well as any other alternative use of those resources. Each alternative use would have different benefits (and unseen costs).
  • This is not a mere question of engineering the strongest or even the most cost-effective structure to get across the Hudson, this is a question of what is the strongest or most cost-effective possible future version of New York City.
  • “We are building the world’s 20th search engine at a time when most of the others have been abandoned as being commoditized money losers. We’ll strip out all of the ad-supported news and portal features so you won’t be distracted from using the free search stuff.”
  • But, of course, Google survived, prospered, and continues towards its apparent goal of eating the entire internet (while also making cars drive themselves, putting cameras on everyone’s heads, and generally making Steve Ballmer very very angry). So, why did Google win? The answer is, perhaps surprisingly, in Hayek’s theory.
    • anonymous
       
      Very embarrassing videos.
  • “Our goal always has been to index all the world’s data.” Talk about anemic goals, come on Google, show some ambition!
  • So, is this one of Hayek’s technical problems or is this an economic one?
  • Our gut might first tell us that it is technical.
  • Sure, all this data is now hanging out in one place for free, but to make a useful index you need to determine how much people value different data. We need data about the data.
  • In Soviet Russia, failed attempts at arranging resources destroyed the information about the resources. The free market is the best way to figure out how individual people value individual resources. When left to trade voluntarily, people reveal their preferences with their willingness to pay. By arranging resources through coercion you’ve blinded yourself to the emergent value of the resources because you’ve forbidden voluntary arrangement in the economy.
  • This is different on the internet.
  • The data resources are not rivalrous
  • Search used to be really bad. Why? Because search companies were using either (a) content-producer willingness to pay for indexing, (b) mere keyword search or (c) some combination of editorial centralized decision-making to organize lists of sites.
  • These methods only work if you think that the best site about ducks is either (a) the site that has the most money to pay Altavista for prime “duck” listing, (b) the site that has the most “ducks” in its text, or (c) the site that was most appealing to your employees tasked with finding duck sites.
  • If 999 other websites linked to one website about ducks, you can bet that most people think that this site is better at explaining ducks than a site with only one link to it (even if that link was horse-sized). This link-counting intuition is sketched in code after this item.
  • So Google uses the decentralized Hayekian knowledge of the masses to function. Why does this mean we’re all Hayekians?
  • All of the questions of organizing activity on the internet are solved (when they are, in fact, solved successfully) using Hayekian decentralized knowledge.
  • Amazon customer reviews are how we find good products. Ebay feedback is how we find good individual sellers. And, moreover, whole brick and mortar services are moving to a crowd-sourced model, with sites like AirBnB for lodging and RelayRides for car rental.
  • the giant firms of tomorrow will be those that empower people to freely share their knowledge and resources in a vibrant marketplace.
  • Today, the central challenge for a firm is not to develop careful internal management but rather the non-trivial task of building marketplaces and forums to encourage decentralized knowledge production and cooperation.
  • Our generation already understands this on a gut level. We Google everything. We defend freedom on the internet as if it were our own personal real-world liberty at stake. We mock the antiquated central planners of the early web (looking at you, AOL and Prodigy) for their ineffectual obviousness and denial of crowd-sourced knowledge.
  • We all know where the best economic knowledge lies, in the many and never the few.
  •  
    "We are all Hayekians now. Specifically, the "we all" is not quite everyone. The "all" to which I'm referring is people of the internet-people who've grown up with the net and use it for a majority of their day-to-day activities. And, the "Hayekian" to which I'm referring is not his theories on capital, or the rule of law, but, specifically his vision of knowledge."
anonymous

Information Consumerism: The Price of Hypocrisy - 0 views

  • let us not pass over America’s surveillance addiction in silence. It is real; it has consequences; and the world would do itself a service by sending America to a Big Data rehab. But there’s more to learn from the Snowden affair.
  • It has also busted a number of myths that are only peripherally related to surveillance: myths about the supposed benefits of decentralized and commercially-operated digital infrastructure, about the current state of technologically-mediated geopolitics, about the existence of a separate realm known as “cyberspace.”
  • First of all, many Europeans are finally grasping, to their great dismay, that the word “cloud” in “cloud computing” is just a euphemism for “some dark bunker in Idaho or Utah.”
  • ...50 more annotations...
  • Second, ideas that once looked silly suddenly look wise. Just a few months ago, it was customary to make fun of Iranians, Russians and Chinese who, with their automatic distrust of all things American, spoke the bizarre language of “information sovereignty.”
  • Look who’s laughing now: Iran’s national email system launched a few weeks ago. Granted, the Iranians want their own national email system, in part, so that they can shut it down during protests and spy on their own people at other times. Still, they got the geopolitics exactly right: over-reliance on foreign communications infrastructure is no way to boost one’s sovereignty. If you wouldn’t want another nation to run your postal system, why surrender control over electronic communications?
    • anonymous
       
      This could have been written by StratFor.
  • Third, the sense of unconditional victory that civil society in both Europe and America felt over the defeat of the Total Information Awareness program – a much earlier effort to establish comprehensive surveillance – was premature.
  • The problem with Total Information Awareness was that it was too big, too flashy, too dependent on government bureaucracy. What we got instead, a decade later, is a much nimbler, leaner, more decentralized system, run by the private sector and enabled by a social contract between Silicon Valley and Washington.
  • This is today’s America in full splendor: what cannot be accomplished through controversial legislation will be accomplished through privatization, only with far less oversight and public control.
  • From privately-run healthcare providers to privately-run prisons to privately-run militias dispatched to war zones, this is the public-private partnership model on which much of American infrastructure operates these days.
  • Communications is no exception. Decentralization is liberating only if there’s no powerful actor that can rip off the benefits after the network has been put in place.
  • Fourth, the idea that digitization has ushered in a new world, where the good old rules of realpolitik no longer apply, has proved to be bunk. There’s no separate realm that gives rise to a new brand of “digital” power; it’s one world, one power, with America at the helm.
    • anonymous
       
      THIS, right here, is crucial.
  • The sheer naivete of statements like this – predicated on the assumption that somehow one can “live” online the way one lives in the physical world and that virtual politics works on a logic different from regular politics – is illustrated by the sad case of Edward Snowden, a man with a noble mission and awful trip-planning skills.
  • Fifth, the once powerful myth that there exists a separate, virtual space where one can have more privacy and independence from social and political institutions is dead.
  • Microsoft’s general counsel wrote that “looking forward, as Internet-based voice and video communications increase, it is clear that governments will have an interest in using (or establishing) legal powers to secure access to this kind of content to investigate crimes or tackle terrorism. We therefore assume that all calls, whether over the Internet or by fixed line or mobile phone, will offer similar levels of privacy and security.”
  • Read this again: here’s a senior Microsoft executive arguing that making new forms of communication less secure is inevitable – and probably a good thing.
  • Convergence did happen – we weren’t fooled! – but, miraculously, technologies converged on the least secure and most wiretap-friendly option available.
  • This has disastrous implications for anyone living in dictatorships. Once Microsoft and its peers start building software that is insecure by design, it turbocharges the already comprehensive spying schemes of authoritarian governments. What neither NSA nor elected officials seem to grasp is that, on matters of digital infrastructure, domestic policy is also foreign policy; it’s futile to address them in isolation.
  • This brings us to the most problematic consequence of Snowden’s revelations. As bad as the situation is for Europeans, it’s the users in authoritarian states who will suffer the most.
  • And not from American surveillance, but from domestic censorship. How so? The already mentioned push towards “information sovereignty” by Russia, China or Iran would involve much more than protecting their citizens from American surveillance. It would also trigger an aggressive push to shift public communication among these citizens – which, to a large extent, still happens on Facebook and Twitter – to domestic equivalents of such services.
  • It’s probably not a coincidence that LiveJournal, Russia’s favorite platform, suddenly had maintenance issues – and was thus unavailable for general use – at the very same time that a Russian court announced its verdict against the popular blogger-activist Alexei Navalny.
  • For all the concerns about Americanization and surveillance, US-based services like Facebook or Twitter still offer better protection for freedom of expression than their Russian, Chinese or Iranian counterparts.
  • This is the real tragedy of America’s “Internet freedom agenda”: it’s going to be the dissidents in China and Iran who will pay for the hypocrisy that drove it from the very beginning.
  • On matters of “Internet freedom” – democracy promotion rebranded under a sexier name – America enjoyed some legitimacy as it claimed that it didn’t engage in the kinds of surveillance that it itself condemned in China or Iran. Likewise, on matters of cyberattacks, it could go after China’s cyber-espionage or Iran’s cyber-attacks because it assured the world that it engaged in neither.
  • Both statements were demonstrably false but lack of specific evidence has allowed America to buy some time and influence.
  • What is to be done? Let’s start with surveillance. So far, most European politicians have reached for the low-hanging fruit – law – thinking that if only they can better regulate American companies – for example, by forcing them to disclose how much data they share with the NSA, and when – this problem will go away.
  • This is a rather short-sighted, naïve view that reduces a gigantic philosophical problem – the future of privacy – to the seemingly manageable size of data-retention directives.
  • Our current predicaments start at the level of ideology, not bad policies or their poor implementation.
  • As our gadgets and previously analog objects become “smart,” this Gmail model will spread everywhere. One set of business models will supply us with gadgets and objects that will either be free or be priced at a fraction of their real cost.
  • In other words, you get your smart toothbrush for free – but, in exchange, you allow it to collect data on how you use the toothbrush.
  • If this is, indeed, the future that we are heading towards, it’s obvious that laws won’t be of much help, as citizens would voluntarily opt for such transactions – the way we already opt for free (but monitorable) email and cheaper (but advertising-funded) ereaders.
  • In short, what is now collected through subpoenas and court orders could be collected entirely through commercial transactions alone.
  • Policymakers who think that laws can stop this commodification of information are deluding themselves. Such commodification is not happening against the wishes of ordinary citizens but because this is what ordinary citizen-consumers want.
  • Look no further than Google’s email and Amazon’s Kindle to see that no one is forced to use them: people do it willingly. Forget laws: it’s only through political activism and a robust intellectual critique of the very ideology of “information consumerism” that underpins such aspirations that we would be able to avert the inevitable disaster.
  • Where could such critique begin? Consider what might, initially, seem like a bizarre parallel: climate change.
  • For much of the 20th century, we assumed that our energy use was priced correctly and that it existed solely in the consumer paradigm of “I can use as much energy as I can pay for.” Under that paradigm, there was no ethics attached to our energy use: market logic has replaced morality – which is precisely what has enabled fast rates of economic growth and the proliferation of consumer devices that have made our households electronic paradises free from tiresome household work.
  • But as we have discovered in the last decade, such thinking rested on a powerful illusion that our energy use was priced correctly – that we in fact paid our fair share.
  • But of course we had never priced our energy use correctly because we never factored in the possibility that life on Earth might end even if we balance all of our financial statements.
  • The point is that, partly due to successful campaigns by the environmental movement, a set of purely rational, market-based decisions have suddenly acquired political latency, which has given us differently designed cars, lights that go off if no one is in the room, and so forth.
  • It has also produced citizens who – at least in theory – are encouraged to think of implications that extend far beyond the ability to pay their electricity bill.
  • Right now, our decision to buy a smart toothbrush with a sensor in it – and then to sell the data that it generates – is presented to us as a purely commercial decision that affects no one but us.
  • But this is so only because we cannot imagine an information disaster as easily as we can imagine an environmental disaster.
  • there are profound political and moral consequences to information consumerism – and they are comparable to energy consumerism in scope and importance.
  • We should do our best to suspend the seeming economic normalcy of information sharing. An attitude of “just business!” will no longer suffice. Information sharing might have a vibrant market around it but it has no ethical framework to back it up.
  • NSA surveillance, Big Brother, Prism: all of this is important stuff. But it’s as important to focus on the bigger picture -- and in that bigger picture, what must be subjected to scrutiny is information consumerism itself – and not just the parts of the military-industrial complex responsible for surveillance.
  • As long as we have no good explanation as to why a piece of data shouldn’t be on the market, we should forget about protecting it from the NSA, for, even with tighter regulation, intelligence agencies would simply buy – on the open market – what today they secretly get from programs like Prism.
  • Some might say: If only we could have a digital party modeled on the Green Party but for all things digital. A greater mistake is harder to come by.
  • What we need is the mainstreaming of “digital” topics – not their ghettoization in the hands and agendas of the Pirate Parties or whoever will come to succeed them. We can no longer treat the “Internet” as just another domain – like, say, “the economy” or the “environment” – and hope that we can develop a set of competencies around it.
  • Forget an ambiguous goal like “Internet freedom” – it’s an illusion and it’s not worth pursuing. What we must focus on is creating environments where actual freedom can still be nurtured and preserved.
  • The Pirates’ tragic miscalculation was trying to do too much: they wanted to change both the process of politics and its content. That project was so ambitious that it was doomed to failure from the very beginning.
  • whatever reforms the Pirates have been advancing did not seem to stem from long critical reflection on the pitfalls of the current political system but, rather, from their belief that the political system, being incompatible with the most successful digital platforms from Wikipedia to Facebook, must be reshaped in their image. This was – and is – nonsense.
  • A parliament is, in fact, different from Wikipedia – but the success of the latter tells us absolutely nothing about the viability of the Wikipedia model as a template for remodeling our political institutions.
  • In as much as the Snowden affair has forced us to confront these issues, it’s been a good thing for democracy. Let’s face it: most of us would rather not think about the ethical implications of smart toothbrushes or the hypocrisy involved in Western rhetoric towards Iran or the genuflection that more and more European leaders show in front of Silicon Valley and its awful, brain-damaging language, the Siliconese.
  • The least we can do is to acknowledge that the crisis is much deeper and that it stems from intellectual causes as much as from legal ones. Information consumerism, like its older sibling energy consumerism, is a much more dangerous threat to democracy than the NSA.
  •  
    "The problem with the sick, obsessive superpower revealed to us by Edward Snowden is that it cannot bring itself to utter the one line it absolutely must utter before it can move on: "My name is America and I'm a dataholic.""
anonymous

Products of Slavery: Revealing Child and Forced Labor in Supply Chains - 0 views

  •  
    "Products of Slavery [productsofslavery.org] is an online visualization that takes the data (PDF) from a report of the U.S. Department of Labor on child and forced labor worldwide, and makes it open and accessible. Investigations show that more than 122 different products are made using child or forced labor in more than 58 countries. The website is part of Anti-Slavery International's ongoing campaign, as it aims to work with businesses to eradicate slavery in private sector supply chains. The interactive map shows the types of products that are produced in specific countries using child labor, forced labor or both. The quantitative data is accompanied with is called here as "facts": moving quotes that illustrate the meaning and story behind this data."
anonymous

How the internet is making us poor - Quartz - 2 views

  • Sixty percent of the jobs in the US are information-processing jobs, notes Erik Brynjolfsson, co-author of a recent book about this disruption, Race Against the Machine. It’s safe to assume that almost all of these jobs are aided by machines that perform routine tasks. These machines make some workers more productive. They make others less essential.
  • The turn of the new millennium is when the automation of middle-class information processing tasks really got under way, according to an analysis by the Associated Press based on data from the Bureau of Labor Statistics. Between 2000 and 2010, the jobs of 1.1 million secretaries were eliminated, replaced by internet services that made everything from maintaining a calendar to planning trips easier than ever.
  • Economist Andrew McAfee, Brynjolfsson’s co-author, has called these displaced people “routine cognitive workers.” Technology, he says, is now smart enough to automate their often repetitive, programmatic tasks. ”We are in a desperate, serious competition with these machines,” concurs Larry Kotlikoff, a professor of economics at Boston University. “It seems like the machines are taking over all possible jobs.”
  • ...23 more annotations...
  • In the early 1800s, nine out of ten Americans worked in agriculture—now it’s around 2%. At its peak, about a third of the US population was employed in manufacturing—now it’s less than 10%. How many decades until the figures are similar for the information-processing tasks that typify rich countries’ post-industrial economies?
  • To see how the internet has disproportionately affected the jobs of people who process information, check out the gray bars dipping below the 0% line on the chart, below. (I’ve adapted this chart to show just the types of employment that lost jobs in the US during the great recession. Every other category continued to add jobs or was nearly flat.)
  • Here’s another clue about what’s been going on in the past ten years. “Return on capital” measures the return firms get when they spend money on capital goods like robots, factories, software—anything aside from people. (If this were a graph of return on people hired, it would be called “Return on labor”.) A toy calculation of the two ratios appears after this item.
  • Notice: the only industry where the return on capital is as great as manufacturing is “other industries”—a grab bag which includes all the service and information industries, as well as entertainment, health care and education. In short, you don’t have to be a tech company for investing in technology to be worthwhile.
  • For many years, the question of whether or not spending on information technology (IT) made companies more productive was highly controversial. Many studies found that IT spending either had no effect on productivity or was even counter-productive. But now a clear trend is emerging. More recent studies show that IT—and the organizational changes that go with it—are doing firms, especially multinationals (pdf), a great deal of good.
  • Winner-take-all and the power of capital to exacerbate inequality
  • One thing all our machines have accomplished, and especially the internet, is the ability to reproduce and distribute good work in record time. Barring market distortions like monopolies, the best software, media, business processes and, increasingly, hardware, can be copied and sold seemingly everywhere at once. This benefits “superstars”—the most skilled engineers or content creators. And it benefits the consumer, who can expect a higher average quality of goods.
  • But it can also exacerbate income inequality, says Brynjolfsson. This contributes to a phenomenon called “skill-biased technological [or technical] change.” “The idea is that technology in the past 30 years has tended to favor more skilled and educated workers versus less educated workers,” says Brynjolfsson. “It has been a complement for more skilled workers. It makes their labor more valuable. But for less skilled workers, it makes them less necessary—especially those who do routine, repetitive tasks.”
  • “Certainly the labor market has never been better for very highly-educated workers in the United States, and when I say never, I mean never,” MIT labor economist David Autor told American Public Media’s Marketplace.
  • The other winners in this scenario are anyone who owns capital.
  • As Paul Krugman wrote, “This is an old concern in economics; it’s “capital-biased technological change”, which tends to shift the distribution of income away from workers to the owners of capital.”
  • Computers are more disruptive than, say, the looms smashed by the Luddites, because they are “general-purpose technologies,” noted Peter Lindert, an economist at the University of California, Davis.
  • “The spread of computers and the Internet will put jobs in two categories,” said Andreessen. “People who tell computers what to do, and people who are told by computers what to do.” It’s a glib remark—but increasingly true.
  • In March 2012, Amazon acquired Kiva Systems, a warehouse robotics and automation company. In partnership with a company called Quiet Logistics, Kiva’s combination of mobile shelving and robots has already automated a warehouse in Andover, Massachusetts.
  • This time it’s faster: history is littered with technological transitions. Many of them seemed at the time to threaten mass unemployment of one type of worker or another, whether it was buggy whip makers or, more recently, travel agents. But here’s what’s different about information-processing jobs: the takeover by technology is happening much faster.
  • From 2000 to 2007, in the years leading up to the great recession, GDP and productivity in the US grew faster than at any point since the 1960s, but job creation did not keep pace.
  • Brynjolfsson thinks he knows why: More and more people were doing work aided by software. And during the great recession, employment growth didn’t just slow. As we saw above, in both manufacturing and information processing, the economy shed jobs, even as employment in the service sector and professional fields remained flat.
  • Especially in the past ten years, economists have seen a reversal of what they call “the great compression”—that period from the second world war through the 1970s when, in the US at least, more people were crowded into the ranks of the middle class than ever before.
  • There are many reasons why the economy has reversed this “compression,” transforming into an “hourglass economy” with many fewer workers in the middle class and more at either the high or the low end of the income spectrum.
  • The hourglass represents an income distribution that has been more nearly the norm for most of the history of the US. That it’s coming back should worry anyone who believes that a healthy middle class is an inevitable outcome of economic progress, a mainstay of democracy and a healthy society, or a driver of further economic development.
    • anonymous
       
      This is the meaty center. It's what I worry about. The "Middle Class" may just be an anomaly.
  • Indeed, some have argued that as technology aids the gutting of the middle class, it destroys the very market required to sustain it—that we’ll see “less of the type of innovation we associate with Steve Jobs, and more of the type you would find at Goldman Sachs.”
  • So how do we deal with this trend? The possible solutions to the problems of disruption by thinking machines are beyond the scope of this piece. As I’ve mentioned in other pieces published at Quartz, there are plenty of optimists ready to declare that the rise of the machines will ultimately enable higher standards of living, or at least forms of unemployment as foreign to us as “big data scientist” would be to a scribe of the 17th century.
  • But that’s only as long as you’re one of the ones telling machines what to do, not being told by them. And that will require self-teaching, creativity, entrepreneurialism and other traits that may or may not be latent in children, as well as retraining adults who aspire to middle class living. For now, sadly, your safest bet is to be a technologist and/or own capital, and use all this automation to grab a bigger-than-ever share of a pie that continues to expand.
  •  
    "Everyone knows the story of how robots replaced humans on the factory floor. But in the broader sweep of automation versus labor, a trend with far greater significance for the middle class-in rich countries, at any rate-has been relatively overlooked: the replacement of knowledge workers with software. One reason for the neglect is that this trend is at most thirty years old, and has become apparent in economic data only in perhaps the past ten years. The first all-in-one commercial microprocessor went on sale in 1971, and like all inventions, it took decades for it to become an ecosystem of technologies pervasive and powerful enough to have a measurable impact on the way we work."
anonymous

Salt: More confirmation bias for your preferred narrative - 0 views

  • When it comes to health, it’s the hard outcomes we care about. We pay attention to measures like high blood pressure (hypertension) because of the relationship between hypertension and events like heart attacks and strokes. The higher the blood pressure, the greater the risk of these events. The relationship between the two is well established. So when it comes to preventive health, we want to lower blood pressure to reduce the risk of subsequent effects. Weight loss, diet, and exercise are usually prescribed (though often insufficient) to reduce blood pressure. For many, drug treatment is still required.
  • There is reasonable population-level data linking higher levels of salt consumption with higher blood pressure.
  • From a population perspective, interventions that dramatically lower salt intake result in lower blood pressure.
  • ...12 more annotations...
  • the causality between salt consumption and all of these negative effects is less clear.
  • So does reducing dietary salt reduce cardiovascular events? That’s the key question.
  • When it comes to clinical practice guidelines, low salt diets are the mainstays of pretty much every set of guidelines on the management of high blood pressure.
  • The evidence supporting the relationship with hard outcomes is robust, but not rock-solid. We don’t have causal data, but we do have considerable epidemiologic evidence to suggest that reducing dietary salt consumption is likely to offer net benefits in the management of hypertension.
  • The vast majority of the salt we eat (75%) is from processed foods. Restaurants are a large source, too.
  • Few foods in their original state are naturally high in salt, and in general, we don’t add that much at the table.
  • Seven studies made up this meta-analysis, including 6,489 patients in total. Three studies looked at those with normal blood pressure, two included patients with high blood pressure, and one was a mixed population, including patients with heart failure. The overall effect? Interventions had small effects on sodium consumption, which led to small effects on blood pressure. There was insufficient information to analyze the effects on cardiovascular disease endpoints.
  • The authors go on to make the following point, which was ignored in the media coverage: Our findings are consistent with the belief that salt reduction is beneficial in normotensive and hypertensive people. However, the methods of achieving salt reduction in the trials included in our review, and other systematic reviews, were relatively modest in their impact on sodium excretion and on blood pressure levels, generally required considerable efforts to implement and would not be expected to have major impacts on the burden of CVD.
  • The authors did not conclude that reducing salt consumption is ineffective.
  • Despite the modest and equivocal results, the authors seem to have lost the narrative on their own research findings: Professor Rod Taylor, the lead researcher of the review, is ‘completely dismayed’ at the headlines that distort the message of his research published today. Having spoken to BBC Scotland, and to CASH, he clarified that the review looked at studies where people were advised to reduce salt intake compared to those who were not and found no differences, this is not because reduced salt doesn’t have an effect but because it’s hard to reduce salt intake for a long time. He stated that people should continue to strive to reduce their salt intake to reduce their blood pressure, but that dietary advice alone is not enough, calling for further government and industry action.
  • The true finding from the Cochrane review is that dietary interventions to reduce salt intake are largely ineffective at reducing salt consumption.
  • Until the data are more clear, you can find the data to support whatever narrative you believe. If you want to demonize salt and ignore other factors that contribute to poor cardiovascular outcomes, you can do that. And if you believe that interventions to reduce salt consumption are misguided and unwarranted, and symptomatic of an overreaching nanny state, then you can find data to support that position, too.
  •  
    "Judging by the recent press reports, the latest Cochrane review reveals that everything we've been told about eating salt, and cardiovascular disease, is wrong."
anonymous

How Basecamp Next got to be so damn fast without using much client-side UI by David of 37signals - 0 views

  • #1: Stacker – an advanced pushState-based engine for sheets
  • The Stacker engine reduces HTTP requests on a per-page basis to a minimum by keeping the layout the same between requests.
  • This means that only the very first request spends time downloading CSS, JavaScript, and image sprites. Every subsequent request will only trigger a single HTTP request to get the HTML that changed and whatever additional images are needed.
  • ...14 more annotations...
  • Now Stacker is purposely built for the sheet-based UI that we have. It knows about sheet nesting, how to break out of a sheet chain, and more.
  • #2: Caching TO THE MAX
  • Stacker can only make things appear so fast. If actions still take 500ms to render, it’s not going to have that ultra snappy feel that Basecamp Next does. To get that sensation, your requests need to take less than 100ms. Once our caches are warm, many of our requests take less than 50ms and some even less than 20ms.
  • The only way we can get complex pages to take less than 50ms is to make liberal use of caching.
  • Every stand-alone piece of content is cached in Basecamp Next.
  • This is illustrated in the picture above. If I change todo #45, I’ll have to bust the cache for the todo, the cache for the list, the cache for all todolists, and the cache for the page itself. That sounds terrible on the surface until you realize that everything else is cached as well and can be reused.
  • Thou shall share a cache between pages
  • To improve the likelihood that you’re always going to hit a warm cache, we’re reusing the cached pieces all over the place. There’s one canonical template for each piece of data and we reuse that template in every spot that piece of data could appear.
  • Now this is often quite easy. A todo looks the same regardless of where it appears. Here’s the same todo appearing in three different pages, all pulled from the same cache.
  • Thou shall share a cache between people
  • This is where a sprinkle of JavaScript comes in handy. Instead of embedding the logic in the generation of the template, you decorate it after the fact with JavaScript; a sketch of how that happens appears after this item.
  • It’s a cached list of possible recipients of a new message on a given project, but my name is not in it, even though it’s in the cache. That’s because each checkbox is decorated with a data-subscriber-id HTML attribute that corresponds to their user id. The JavaScript reads a cookie that contains the current user’s id, finds the element with a matching data-subscriber-id, and removes it from the DOM. Now all users can share the same user list for notification without seeing their own name on the list.
  • Combining it all and sprinkling HTTP caching and infinite pages on top
  • None of these techniques in isolation are enough to produce the super snappy page loads we’ve achieved with Basecamp Next, but in combination they get there.
  •  
    "Speed is one of those core competitive advantages that have long-term staying power. As Jeff Bezos would say, nobody is going to wake up 10 years from now and wish their application was slower. Investments in speed are going to pay dividends forever. Now for the secret sauce. Basecamp is so blazingly fast for two reasons:"
anonymous

Leaked report shows high civilian death toll from CIA drone strikes - 0 views

  • The leaked document – which the Bureau obtained from three separate sources – is based on field reports by government officials rather than on media coverage. The Bureau understands that the document is continually updated as attacks occur – although the copy obtained ends with a strike on October 24, 2009.
  • Read the full internal Pakistani document.
  • Each tribal area such as North Waziristan is administered by a Political Agent and his assistants. Beneath them are agents known as tehsildars and naibs who gather information when drone strikes occur – the names and identities of those killed, damage to property and so on. Additional information is also drawn from the khassadar – the local tribal police – and from paid informants in villages.
  • ...2 more annotations...
  • Ambassador Rustam Shah Mohmand, who was a senior administrator in the tribal areas for 25 years between 1973 and 1998, cautions that the released file might not be the fullest data available. Noting that Pakistan’s military is responsible for security in FATA, he told the Bureau: ‘Tribal documents might present a broad picture. But any accuracy is dependent on what data the military chooses to release to or withhold from the political agents. In the last eight years, for example, no precise casualty figures have ever been submitted to Pakistan’s parliament.’
  • ‘How come the same civil servants are feeding one kind of data to the Peshawar High Court and another kind of data to the FATA secretariat?’ asked Shahzad Akbar, the Pakistani barrister behind the successful Peshawar case. ‘Are they fudging the numbers based on who was on the receiving end?’ US counter-terrorism officials declined to comment on the specifics of the leaked document, though referred the Bureau to recent comments by both President Obama and CIA Director Brennan stating that the US goes to great lengths to limit civilian deaths in covert drone strikes.
  •  
    "A secret document obtained by the Bureau of Investigative Journalism reveals for the first time the Pakistan government's internal assessment of dozens of drone strikes, and shows scores of civilian casualties."
anonymous

Gapminder Desktop: Explore the World of Data from your own Computer - 0 views

  •  
    "To overcome the online requirement, Gapminder Desktop [gapminder.org has recently been released for all operating systems. Based on Adobe AIR technology, this "No Internet Required" software allows people to explore the same data from their own computer, even when there is no Internet connectivity available. In particular, Gapminder Desktop is aimed to teachers and students to bookmark and present global trends in all sort of situations. It comes preloaded with 600+ indicators on health, environment, economy, education, poverty, technology, and so on."
anonymous

Lies, Damned Lies, and Medical Science - 0 views

  • For whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names.
  • One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good?
  • Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began?
  • ...33 more annotations...
  • “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile.
  • That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research.
  • He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong.
  • He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
  • “I take all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me, as we contemplate the tree the day after the team’s meeting. “‘Will my research grant be approved?’” He chuckles, but Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack. And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.
  • “I assumed that everything we physicians did was basically right, but now I was going to help verify it,” he says. “All we’d have to do was systematically review the evidence, trust what it told us, and then everything would be perfect.” It didn’t turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science “never minds” are hardly secret. And they sometimes make headlines, as when in recent years large studies or growing consensuses of researchers concluded that mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; or when widely prescribed antidepressants such as Prozac, Zoloft, and Paxil were revealed to be no more effective than a placebo for most cases of depression; or when we learned that staying out of the sun entirely can actually increase cancer risks; or when we were told that the advice to drink lots of water during intense exercise was potentially fatal; or when, last April, we were informed that taking fish oil, exercising, and doing puzzles doesn’t really help fend off Alzheimer’s disease, as long claimed. Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.
  • “I realized even our gold-standard research had a lot of problems,” he says.
  • This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
  • Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research.
  • In 2005, he unleashed two papers that challenged the foundations of medical research.
  • He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. (The core of that proof is restated compactly after this item.)
  • The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views.
  • sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim.
  • Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.
  • When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have pretty good reason to take more vitamin X, and physicians routinely pass these recommendations on to patients. But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
  • the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up.
  • But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you.
  • And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge).
  • And so it goes for all medical studies, he says. Indeed, nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest. The exciting links between genes and various diseases and traits that are relentlessly hyped in the press for heralding miraculous around-the-corner treatments for everything from colon cancer to schizophrenia have in the past proved so vulnerable to error and distortion, Ioannidis has found, that in some cases you’d have done about as well by throwing darts at a chart of the genome.
  • Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it.
  • The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
  • Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.
  • Medical research is not especially plagued with wrongness. Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).
  • Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.” Ioannidis offers a theory for the relatively calm reception. “I think that people didn’t feel I was only trying to provoke them, because I showed that it was a community problem, instead of pointing fingers at individual examples of bad research,” he says. In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.
  • The irony of his having achieved this sort of success by accusing the medical-research community of chasing after success is not lost on him, and he notes that it ought to raise the question of whether he himself might be pumping up his findings.
  • “If I did a study and the results showed that in fact there wasn’t really much bias in research, would I be willing to publish it?” he asks. “That would create a real psychological conflict for me.” But his bigger worry, he says, is that while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”
  • What they’re not trained to do is to go back and look at the research papers that helped make these drugs the standard of care.
  • Tatsioni doesn’t so much fear that someone will carve out the man’s healthy appendix. Rather, she’s concerned that, like many patients, he’ll end up with prescriptions for multiple drugs that will do little to help him, and may well harm him. “Usually what happens is that the doctor will ask for a suite of biochemical tests—liver fat, pancreas function, and so on,” she tells me. “The tests could turn up something, but they’re probably irrelevant. Just having a good talk with the patient and getting a close history is much more likely to tell me what’s wrong.” Of course, the doctors have all been trained to order these tests, she notes, and doing so is a lot quicker than a long bedside chat. They’re also trained to ply the patient with whatever drugs might help whack any errant test numbers back into line.
  • patients often don’t even like it when they’re taken off their drugs, she explains; they find their prescriptions reassuring.
  • “Researchers and physicians often don’t understand each other; they speak different languages,” he says. Knowing that some of his researchers are spending more than half their time seeing patients makes him feel the team is better positioned to bridge that gap; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians.
  • Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding.
  • “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • being wrong in science is fine, and even necessary
  •  
    "Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors-to a striking extent-still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science." By David H. Freedman at The Atlantic on November 2010.
anonymous

Look at This Visualization of Drone Strike Deaths - 0 views

  • The data is legit; it comes from the Bureau of Investigative Journalism, but as Emma Roller at Slate notes, the designers present it weirdly, claiming at the beginning of the interactive that fewer than 2 percent of drone deaths have been "high profile targets," and "the rest are civilians, children and alleged combatants." At the end of the visualization, you find out that a majority of the deaths fall into the "legal gray zone created by the uncertainties of war," as Brian Fung put it at National Journal.
  • But the "legal gray zone" itself is alarming enough—highlighting the lack of transparency surrounding the administration's drone program—as are the discrepancies in total numbers killed. It's between 2,537 and 3,581 (including 411 to 884 civilians) killed since 2004, if you want to go with the BIJ. Or it's between 1,965 and 3,295 people since 2004 (and 261 to 305 civilians), if you want to believe the Counterterrorism Strategy Initiative at the New America Foundation. Or perhaps it's 2,651 since 2006 (including 153 civilians), according to Long War Journal. (The NAF and Long War Journal base estimates on press reports. BIJ also includes deaths reported to the US or Pakistani governments, military and intelligence officials, and other academic sources.)
  •  
    "Pitch Interactive, a California-based data visualization shop, has created a beautiful, if somewhat controversial, visualization of every attack by the US and coalition forces in Pakistan since 2004." Fucking sobering.