Long Game: Group items matching "data" in title, tags, annotations, or URL
anonymous

Restaurant websites: Why are they so awful? Which ones are the absolute worst? - 0 views

  • These nightmarish websites were spawned by restaurateurs who mistakenly believe they can control the online world the same way they lord over a restaurant. "In restaurants, the expertise is in the kitchen and in hospitality in general," says Eng San Kho, a partner at the New York design firm Love and War, which has created several unusually great restaurant sites (more on those in a bit). "People in restaurants have a sense that they want to create an entertainment experience online—that's why disco music starts, that's why Flash slideshows open. They think they can still play the host even here online."
  • When you visit many terrible restaurant websites in succession, it becomes obvious that they're not bad because of neglect or lack of funds—these food purveyors appear to have spent a great deal of money and time to uglify their pages.
  • Masa, the exclusive New York sushi bar, presents you with a pages-long, scroll-bar-free biography of its chef, but (as far as I can tell) no warning that you'll spend $400 or more per person for dinner.
  • ...1 more annotation...
  • I did get a plausible-sounding explanation of the design process from Tom Bohan, who heads up Menupages, the fantastic site that lists menus of restaurants in several large cities. "Say you're a designer and you've got to demo a site you've spent two months creating," Bohan explains. "Your client is someone in their 50s who runs a restaurant but is not very in tune with technology. What's going to impress them more: Something with music and moving images, something that looks very fancy to someone who doesn't know about optimizing the Web for consumer use, or if you show them a bare-bones site that just lists all the information? I bet it would be the former—they would think it's great and money well spent."
  •  
    "Restaurant sites are the product of restaurant culture. These nightmarish websites were spawned by restaurateurs who mistakenly believe they can control the online world the same way they lord over a restaurant. "In restaurants, the expertise is in the kitchen and in hospitality in general," says Eng San Kho, a partner at the New York design firm Love and War, which has created several unusually great restaurant sites (more on those in a bit). "People in restaurants have a sense that they want to create an entertainment experience online-that's why disco music starts, that's why Flash slideshows open. They think they can still play the host even here online.""
anonymous

On SXSW 2013: Vanishing Interfaces, Wearable Tech, & AI's - 0 views

  • The last year has presented us with the vanguard of the Wearable Tech revolution.  Faced with products coming out of Kickstarter and perhaps most prominently, the Google Glass project, the equation is about to become very complex.
  • Krishna spoke specifically to the idea that we need to eliminate as many interfaces as we can in clever ways to enhance the User Experience.  The AI panel emphasized the changes coming to the User Experience as predicted by products like Siri and Google Now.  Together, they paint a picture of how building with an eye to streamlining interfaces with AI modules will build a new future for us - a future that is going to be increasingly filled with various devices.
  • what functions do your products have that best fit on those interfaces?
  • ...4 more annotations...
  • The vanguard is already here in Siri and Google Now.  Each of these represents a type of agent that knows a few things about us.  Google Now will tell you, without your asking, the time it takes to get home from work.  Siri and Google Now both will take your voice input and perform actions that would normally be fairly complicated through a series of interfaces.
  • Not all of us have access to complex and robust voice recognition libraries and a network of camera-equipped cars.  Many of us are, however, in a position to collect or analyze large sets of data.
  • App collects data, sends output to wearable tech.
  • an interface can still exist behind this.  You can open the app and adjust parameters or inputs, but these interfaces become supporting elements, not the primary interface element.  Data that's collected becomes the primary input, done automatically.
  •  
    "Less than six years ago the Apple iPhone blew our minds with a new way to think of something we thought we knew really well:  the cellphone.  A couple years later, tablets crashed the party, giving us a big, rich interface to browse and connect with while leaned back in our recliners.  Technologists like myself have been scrambling all the while to find the best methods to utilize the capabilities of these new interfaces ever since."
anonymous

Where criminals get their guns - 0 views

  • Believe it or not I actually heard her say, “A lot of criminals get their guns from gun stores.” Really? Let’s look at the facts.
  • A 1997 Justice Department survey of more than 18,000 state and federal convicts revealed the truth:
    • 39.6% of criminals obtained a gun from a friend or family member
    • 39.2% of criminals obtained a gun on the street or from an illegal source
    • 0.7% of criminals purchased a gun at a gun show
    • 1% of criminals purchased a gun at a flea market
    • 3.8% of criminals purchased a gun from a pawn shop
    • 8.3% of criminals actually bought their guns from retail outlets
  • Note that less than 9 percent of all guns obtained by criminals in this survey came from retail outlets, hardly “a lot” compared to the almost 40 percent of convicts who obtained guns from friends or family or the almost 40 percent who obtained them illegally on the street.
  • ...3 more annotations...
  • The gun-show loophole? Less than 1 percent of criminal guns came from gun shows. Nothing there, either.
  • The survey data were analyzed and released in 2001 then revised in 2002, but while the eye-opening details are more than 10 years old it’s hard to believe criminal responses have changed much over the last decade.
    • anonymous
       
      On the contrary, this is worth investigating with fresher data. The perception of a culture war against gun owners has caused sales to surge in *spite* of an overall decrease in the proportion of citizens who own guns. In other words: gun owners are buying more guns while fewer people want to own them. My gut says that may have moved some statistical indicators. Still, the author's point stands. Even without fresh data, you can get a good snapshot of the rough picture.
  • “Universal” background checks won’t work. The fact is we have them now. Anytime a law-abiding citizen purchases a gun from a brick-and-mortar or online retailer, pawn shop owner or private dealer—essentially any licensed dealer who sells more than a handful of firearms per month—he or she must submit to a background examination via the National Instant Check System.
  •  
    "Across all media these days the information is far from accurate when it comes to the culture war waged against gun owners. The topic the other day on a Fox News program was Chicago's "gun problem." Of course everyone knows Chicago's problem is crime committed by thugs who disobey the law, but that didn't stop one woman from insisting "universal" background checks would cut down the number of guns on the city's streets."
anonymous

2,000 Years of Continental Climate Changes - 1 views

  • Thirty-year mean temperatures for the seven PAGES 2k continental-scale regions arranged vertically from north to south. Colors indicate the relative temperature. The most prominent feature of nearly all of the regional temperature reconstructions is the long-term cooling, which ended late in the 19th century.
  • North America includes a shorter tree-ring-based and a longer pollen-based reconstruction.
  • Each color band represents a 30-year mean temperature found on each continent.
  •  
    "Climate change is a complicated, and sometimes controversial, global topic.  I really like this data visualization of 2,000 Years of Continental Climate Changes that was included as part of the report published by the "2K Network" of the International Geosphere Biosphere Program (IGBP) Past Global Changes (PAGES) project."
anonymous

"Engagement" Is Not A Metric, It's An Excuse - Occam's Razor by Avinash Kaushik - 0 views

  • There was so much we could measure and so little. As Marketers we have been frustrated with the near constant 2% conversion rates for our websites. We would like to have another metric that justifies our existence, and of course that of our website.
  • The fervor for measuring engagement is even higher for non-ecommerce websites because there is little in terms of Outcomes to measure there.
  • Engagement, that phrase / name, is not a metric that anyone understands and even when used it rarely drives the action / improvement on the website.
  • ...24 more annotations...
  • Because it is not really a metric, it is an excuse.
  • Even as creating engaging experiences on the web is mandatory, the metric called Engagement is simply an excuse for an unwillingness to sit down and identify why a site exists.
  • An excuse for an unwillingness to identify real metrics that measure if your web presence is productive. An excuse for taking a short cut with clickstream data rather than applying a true Web Analytics 2.0 approach to measure success.
  • let's try to understand why in the context of web analytics so many efforts at measuring "engagement" have yielded almost no results:
  • Each business is unique and each website is trying to accomplish something unique.
  • It is nearly impossible to define engagement in a standard way that can be applied across the board.
  • At the heart of it engagement tries to measure something deeply qualitative.
  • One of my personal golden rules is that a metric should be instantly useful. This one is not.
  • Most of all engagement is a proxy for measuring an outcome from a website.
  • Conversion is not enough, as mentioned above, so we try something else. The problem is that we'll define engagement as a measure of some kind of outcome but we won't give it the sexy name of engagement.
  • In Summary: The reason engagement has not caught on like wild fire (except in white papers and analyst reports and pundit posts) is that it is a "heart" metric we are trying to measure with "head" data, and engagement is such an utterly unique feeling for each website that it will almost always have a unique definition for each and every website.
  • "So what you are saying is that we should not measure engagement." I am saying you should very very carefully consider the above points, then not take a short cut (or as the American's say, a cop out) and actually define the metric as a Outcome metric (see element three of the trinity ).
  • Here is a process you can follow:
  • Step One: Define why your website exists. What is its purpose? Not a five hundred word essay, rather in fifteen words or less. If it helps complete this statement: "When the crap hits the fan the only purpose of my website is to ……….".
  • Step Two: If you did a great job with it then the above statement contains the critical few metrics (three or less) that will identify exactly how you can measure if your website is successful at delivering against its purpose.
  • Step Three: If you have an ecommerce website then revenue or conversion is probably one of your critical few. But one of the critical few is what your senior management might call engagement. Work hard to define exactly what that metric is (see below for ideas).
  • Step Four: Don't call that metric engagement. Call it by its real name. Don't hide behind a pretty moniker.
  • To stimulate your thought process here are some metrics you can use to measure "customer engagement" (that visitors are engaging with your website):
  • "Are you engaged with us?"
  • Likelihood to recommend website
  • Use primary market research
  • Customer retention over time
  • # of Visits per Unique Visits, Recency of Unique Visitors
  • In Summary: When most people measure "engagement" they have not done due diligence to identify what success means for their online presence. In the absence of that hard work they fall into measuring engagement, and then measure something that is hard to action or something that will rarely improve the bottom line. Avoid this at all costs. Think very carefully about what you are measuring if you do measure engagement. If engagement to you is repeat visits by visitors then call it Visit Frequency, don't call it engagement. Don't sexify, simplify! :) If you want to measure "engagement" then think of new and more interesting ways to measure that (see list above). Engagement is at its core a qualitative feeling. It is really hard to measure via pure clickstream (web analytics data). Think different.
  •  
    "Measuring "engagement" seems to be an even longer quest for Marketers and Analysts. There was so much we could measure and so little. As Marketers we have been frustrated with the near constant 2% conversion rates for our websites. We would like to have another metric that justifies our existence, and of course that of our website."
anonymous

I Got No Ecommerce. How Do I Measure Success? - Occam's Razor by Avinash Kaushik - 0 views

  • My recommendation: Measure the four metrics that are under the "Visitor Loyalty" button in Google Analytics (or in your favorite web analytics application). Loyalty, Recency, Length of Visit, Depth of Visit.
  • The goal is to use web analytics data to interpret success of a visit to your website.
  • There is one singular reason I loved 'em: they showed distribution and not simply averages for each of the metric!
  • ...18 more annotations...
  • Visitor Loyalty: During the reporting time period how often do "people" ("visitors") visit my website?
  • The number you are used to seeing is "average visits per visitor". That is usually one point something. It hides the truth.
  • For example you update your website ten times each month. If you have 100% loyal visitor base then they should be visiting your website ten times each month. Are they? What's your number? Is it going up over time?
  • Action: 1) Identify a goal for your non-ecommerce website for the # of visits you expect from the traffic to your website in a given time period (say week, month etc). 2) Measure reality using above report. 3) Compare your performance over time to ensure you are making progress, or potentially not as in my case…
  • Recency: How long has it been since a visitor last visited your website? Sounds confusing? Don't worry it is cool (it even has a psychedelic border! :)……
  • As would be the case for a jobs site. Or craigslist. Or any website that wants lots and lots of repeat visits. Using this simple report you can now see how you are doing when it comes to the distribution of visitors in terms of their propensity to visit your site.
  • Length of Visit: During the reporting period what is the quality of visit as represented by length of a visitor session in seconds.
  • But it has always been frustrating to me how hard it is to get away from the average and measure the distribution of the visits to check if the average time on site is 50 seconds because one person visited for one second and the other person for 100 seconds. The average hides so much. Here's a better alternative……
  • Ain't that better? I think so. So many things jump out at me, but notice that either I lose 'em right away or if some how I can suck them in for one minute then they tend to stay for a long time. Hurray! I have a better idea of how to interact with my visitors.
  • 1) Identify what the distribution is for your website for length of visits. 2) Think of creative ways to engage traffic – what can I do to keep you for sixty seconds because after that you are mine! 3) Should I start charging more for ads on my site – if I have 'em – after 60 seconds? 4) If you are a support website then should you be embarrassed if 20% of your audience was on the site for more than ten minutes!
  • Depth of Visit: During a given time period what is the distribution of number of pages in each visit to the website.
  • You are used to seeing average page views per visitors, above is something that is a lot more helpful. I was also able to get this exact metric from my indextools implementation…..
  • Action: There has been so much said about this already so I'll spare you the pain. You can easily imagine how wonderful and fantastic this data is as you go about analyzing the experience of your customers (and so much more powerful, a million times more, than average page views per visitor!).
  • Recommendations for all of the above metrics:
  • Socialize them to your key stakeholders and decision makers to make them realize what is really happening on your website.
  • Absolutely positively work with your leadership to create goals and then measure against goals over time
  • Segment the data! For Visitor Loyalty or Length of Visit what are the most important acquisition sources? What are the keywords that drive valuable segments of traffic to the website?
  • Segmentation is key to insights that will drive action.
  •  
    "A vast majority of discourse in the web analytics world is about orders and conversions and revenue. There is not enough of it about non-ecommerce websites, metrics and KPI's."  - Occam's Razor by Avinash Kaushik
anonymous

When the Worst Performers are the Happiest Employees - At Work - WSJ - 0 views

  • “Low performers often end up with the easiest jobs because managers don’t ask much of them,” he said, so they’re under less stress and they’re more satisfied with their daily work lives.
  • Meanwhile, dedicated and conscientious workers end up staying at the office late, correcting the work of the low performers, and making sure clients or customers are satisfied. This pattern breeds frustration and disengagement in the high performers—and perhaps ultimately drives them to seek work elsewhere. “They feel stressed and undervalued, and it starts to undermine the high performers’ confidence that the organization is a meritocracy,” said Mr. Murphy.
  • To remedy the situation, managers should speak frankly with high and middle performers, ferreting out what frustrations might potentially send them looking for new opportunities. They should also find out what could motivate them to stick around, he added.
    • anonymous
       
      Sadly, this is very hard to do in some environments. To me, it's a matter of metrics and truly understanding your teams. For instance: it could be that buying people tablets for work (with the unspoken nod that they'll be fun to play with) will placate some, but doing so is a political nightmare. Same with almost any fringe item. They're hard to justify, and it's even harder to know whether the money is well spent, since job satisfaction sits in this 'nebulous zone' with little data. But, as I've seen happen, someone really valuable will leave and an org will effectively 'lose' way more productivity than buying tons of tablets would have cost.
  • ...2 more annotations...
  • In the remaining 58% of organizations surveyed, high performers were the most engaged, or engagement scores were about equal among the employees. In the rarest cases, Murphy said, the middle performers were the most engaged.  That segment of the workforce—the employees who are neither superstars nor slackers—tends to be ignored by managers, he said.
  • Low performers were also more likely than the other two groups to recommend their company as a “great organization to work for.” And in many cases, they didn’t even realize they were low performers. When asked whether the employees at the company “all live up to the same standards,” low performers were far more likely to agree with the statement than their higher-achieving counterparts.
  •  
    "A new study finds that, in 42% of companies, low performers actually report being more engaged - more motivated and more likely to enjoy working at their organization, for example - than middle and high performers do." - Thanks, Erik. Although I don't know why I should *thank* you for this data. :)
anonymous

The Inequality That Matters - 1 views

  • there’s more confusion about this issue than just about any other in contemporary American political discourse.
  • The reality is that most of the worries about income inequality are bogus, but some are probably better grounded and even more serious than even many of their heralds realize. If our economic churn is bound to throw off political sparks, whether alarums about plutocracy or something else, we owe it to ourselves to seek out an accurate picture of what is really going on.
  • Let’s start with the subset of worries about inequality that are significantly overblown.
  • ...107 more annotations...
  • Most analyses of income inequality neglect two major points.
  • First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well.
  • by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • Compare these circumstances to those of 1911, a century ago. Even in the wealthier countries, the average person had little formal education, worked six days a week or more, often at hard physical labor, never took vacations, and could not access most of the world’s culture.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor.
  • There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally.
  • their constituents bear no animus toward rich people, only toward undeservedly rich people.
    • anonymous
       
      Which is how the policy can be reframed to the benefit of those that understand this more cleanly.
  • in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise.
    • anonymous
       
      Provincialism!
  • The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however
  • All that said, income inequality does matter—for both politics and the economy.
  • To see how, we must distinguish between inequality itself and what causes it. But first let’s review the trends in more detail.
  • Income inequality has been rising in the United States, especially at the very top.
  • The data show a big difference between two quite separate issues
  • income growth at the very top
  • greater inequality throughout the distribution
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • Caution is in order, but the overall trend seems robust. Similar broad patterns are indicated by different sources, such as studies of executive compensation. Anecdotal observation suggests extreme and unprecedented returns earned by investment bankers, fired CEOs, J.K. Rowling and Tiger Woods.
  • At the same time, wage growth for the median earner has slowed since 1973.
  • But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary.
  • Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat.
  • Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all.
  • And so we come again to the gains of the top earners, clearly the big story told by the data.
  • It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there.
  • A threshold earner is someone who seeks to earn a certain amount of money and no more.
  • If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • It’s not obvious what causes the percentage of threshold earners to rise or fall, but it seems reasonable to suppose that the more single-occupancy households there are, the more threshold earners there will be, since a major incentive for earning money is to use it to take care of other people with whom one lives.
  • For a variety of reasons, single-occupancy households in the United States are at an all-time high.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice.
  • Many studies suggest that above a certain level more money brings only marginal increments of happiness.
  • What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality.
  • Why is the top 1 percent doing so well?
  • Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top.
  • The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount.
  • The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance.
  • After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top.
  • Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened.
  • There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable.
  • We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • If we are looking for objectionable problems in the top 1 percent of income earners, much of it boils down to finance and activities related to financial markets. And to be sure, the high incomes in finance should give us all pause.
  • some investors opt for a strategy of betting against big, unexpected moves in market prices.
  • Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • To understand how this strategy works, consider an example from sports betting.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad
  • Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs. To this mix we can add the fact that many money managers are investing other people’s money.
  • If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses.
  • And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton.
  • For the people actually investing the money, there’s barely any downside risk other than having to quit the party early.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles.
  • They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors.
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside.
  • Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly.
  • For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • In short, there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good.
  • But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods.
  • In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • The more one studies financial theory, the more one realizes how many different ways there are to construct a “going short on volatility” investment position.
  • In some cases, traders may not even know they are going short on volatility. They just do what they have seen others do. Their peers who try such strategies very often have Jaguars and homes in the Hamptons. What’s not to like?
  • The upshot of all this for our purposes is that the “going short on volatility” strategy increases income inequality.
  • In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance.
  • If you’re wondering, right before the Great Depression of the 1930s, bank profits and finance-related earnings were also especially high.8
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth.
  • Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have.
  • It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw.
  • In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades?
  • Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it.
  • And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • A key lesson to take from all of this is that simply railing against income inequality doesn’t get us very far.
  • We have to find a way to prevent or limit major banks from repeatedly going short on volatility at social expense. No one has figured out how to do that yet.
  • It remains to be seen whether the new financial regulation bill signed into law this past summer will help.
  • The bill does have positive features.
  • First, it forces banks to put up more of their own capital, and thus shareholders will have more skin in the game, inducing them to curtail their risky investments.
  • Second, it also limits the trading activities of banks, although to a currently undetermined extent (many key decisions were kicked into the hands of future regulators).
  • Third, the new “resolution authority” allows financial regulators to impose selective losses, for instance, to punish bondholders if they wish.
  • We’ll see if these reforms constrain excess risk-taking in the long run. There are reasons for skepticism.
  • Most of all, the required capital cushions simply aren’t that high, so a big enough bet against unexpected outcomes still will yield more financial upside than downside
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control
  • It’s also not clear how well regulators can identify risky assets.
  • Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies.
  • Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism.
  • It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem.
  • The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy.
  • More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking.
  • Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense.
  • We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • That’s an underappreciated way to think about our modern, wealthy economy: Smart people have greater reach than ever before, and nothing really can go so wrong for them.
  • How about a world with no bailouts? Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice.
  • Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions.
  • Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty.
  • Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • So what will happen next?
  • One worry is that banks are currently undercapitalized and will seek out or create a new bubble within the next few years, again pursuing the upside risk without so much equity to lose.
  • A second perspective is that banks are sufficiently chastened for the time being but that economic turmoil in Europe and China has not yet played itself out, so perhaps we still have seen only the early stages of what will prove to be an even bigger international financial crisis.
  • A third view is perhaps most likely. We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are.
  • Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe.
  • They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
    • anonymous
       
      Painfully straightforward.
  •  
    "Does growing wealth and income inequality in the United States presage the downfall of the American republic? Will we evolve into a new Gilded Age plutocracy, irrevocably split between the competing interests of rich and poor? Or is growing inequality a mere bump in the road, a statistical blip along the path to greater wealth for virtually every American? Or is income inequality partially desirable, reflecting the greater productivity of society's stars?"
anonymous

Freakonomics: What Went Wrong? - 0 views

  • Oster’s work stirred debate for a few years in the epidemiological literature, but eventually she admitted that the subject-matter experts had been right all along. One of Das Gupta’s many convincing counterpoints was a graph showing that in Taiwan, the ratio of boys to girls was near the natural rate for first and second babies (106:100) but not for third babies (112:100); this pattern held up with or without hepatitis B. In a follow-up blog post, Levitt applauded Oster for bravery in admitting her mistake, but he never credited Das Gupta for her superior work. Our point is not that Das Gupta had to be right and Oster wrong, but that Levitt and Dubner, in their celebration of economics and economists, suspended their critical thinking.
  • In SuperFreakonomics, Levitt and Dubner use a back-of-the-envelope calculation to make the contrarian claim that driving drunk is safer than walking drunk, an oversimplified argument that was picked apart by bloggers. The problem with this argument, and others like it, lies in the assumption that the driver and the walker are the same type of person, making the same kinds of choices, except for their choice of transportation.
  • Such all-else-equal thinking is a common statistical fallacy. In fact, driver and walker are likely to differ in many ways other than their mode of travel. What seem like natural calculations are stymied by the impracticality, in real life, of changing one variable while leaving all other variables constant.
  • ...8 more annotations...
  • This unavoidable tradeoff between false positive and false negative errors is a well-known property of all statistical-prediction applications. Circling back to check all the factors involved in the problem might have helped the authors avoid this mistake.
  • How could an experienced journalist and a widely respected researcher slip up in so many ways? Some possible answers to this question offer insights for the would-be pop-statistics writer.
  • Leave friendship at the door: We attribute many of these errors to the structure of the authors’ collaboration, which, from what we can tell, relies on an informal social network that has many potential failure points.
  • Don’t sell yourself short: Perhaps Levitt’s admirable modesty—he has repeatedly attributed his success to luck and hard work rather than genius—has led him astray. If he feels he is surrounded by economists more exceptional and brilliant than he is, he may let their assertions stand without challenge.
  • Maintain checks and balances: A solid collaboration requires each side to check and balance the other side. Although there’s no way we can be sure, perhaps, in some of the cases described above, there was a breakdown in the division of labor when it came to investigating technical points.
  • Take your time: Success comes at a cost: The constraints of producing continuous content for a blog or website and meeting publisher’s deadlines may have adverse effects on accuracy.
  • Be clear about where you’re coming from: Levitt’s publishers, along with Dubner, characterize him as a “rogue economist.”
  • Use latitude responsibly: When a statistician criticizes a claim on technical grounds, he or she is declaring not that the original finding is wrong but that it has not been convincingly proven. Researchers—even economists endorsed by Steven Levitt—can make mistakes. It may be okay to overlook the occasional mistake in the pursuit of the larger goal of understanding the world. But once one accepts this lower standard—science as plausible stories or data-supported reasoning, rather than the more carefully tested demonstrations that are characteristic of Levitt’s peer-reviewed research articles—one really has to take extra care, consider all sides of an issue, and look out for false positive results.
  •  
    In our analysis of the Freakonomics approach, we encountered a range of avoidable mistakes, from back-of-the-envelope analyses gone wrong to unexamined assumptions to an uncritical reliance on the work of Levitt's friends and colleagues. This turns accessibility on its head: Readers must work to discern which conclusions are fully quantitative, which are somewhat data driven and which are purely speculative.
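
One excerpt above mentions the unavoidable tradeoff between false positive and false negative errors in statistical prediction. The toy threshold classifier below makes that tradeoff concrete; the scores and labels are synthetic and chosen only to show that moving the threshold trades one kind of error for the other.

    # Toy example: predict "positive" whenever a score exceeds a threshold, then see
    # how moving the threshold trades false positives against false negatives.
    # The (score, actually_positive) pairs are synthetic, for illustration only.
    cases = [
        (0.10, False), (0.20, False), (0.35, False), (0.40, True),  (0.45, False),
        (0.55, True),  (0.60, False), (0.70, True),  (0.80, True),  (0.95, True),
    ]

    def error_counts(threshold):
        false_pos = sum(1 for score, positive in cases if score >= threshold and not positive)
        false_neg = sum(1 for score, positive in cases if score < threshold and positive)
        return false_pos, false_neg

    for threshold in (0.3, 0.5, 0.7, 0.9):
        fp, fn = error_counts(threshold)
        print(f"threshold {threshold:.1f}: {fp} false positives, {fn} false negatives")

Raising the threshold drives the false positives toward zero while the false negatives climb, and lowering it does the reverse; no threshold eliminates both.
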
anonymous

The Trouble With Intuition - 0 views

  • Some 45 years after Wise found the private edition of the Sonnets, two British book dealers, named John Carter and Graham Pollard, decided to investigate his finds. They re-examined the Browning volume and identified eight reasons why its existence was inconsistent with typical practices of the era. For example, none of the copies had been inscribed by the author, none were trimmed and bound in the customary way, and the Brownings never mentioned the special private printing in any letters, memoirs, or other documents.
  • The 1847 edition had to be a fake.
  • According to Gladwell, those experts' intuitions proved correct, and the initial scientific tests that authenticated the statue turned out to have been faulty.
  • ...18 more annotations...
  • Cases in which forgeries that intuitively appear real but later are discovered through analysis to be frauds are fairly common in the art world.
  • Gladwell's message in Blink has been interpreted by some readers as a broad license to rely on intuition and dispense with analysis, which can lead to flawed decisions.
  • Intuition means different things to different people. To some it refers to a sudden flash of insight, or even the spiritual experience of discovering a previously hidden truth.
  • In its more mundane form, intuition refers to a way of knowing and deciding that is distinct from and complements logical analysis.
  • The idea that hunches can outperform reason is neither unique nor original to Malcolm Gladwell, of course. Most students and professors have long believed that, when in doubt, test-takers should stick with their first answers and "go with their gut." But data show that test-takers are more than twice as likely to change an incorrect answer to a correct one than vice versa.
  • Intuition does have its uses, but it should not be exalted above analysis.
  • There is, moreover, one class of intuitions that consistently leads us astray—dangerously astray. These intuitions are stubbornly resistant to analysis, and it is exactly these intuitions that we shouldn't trust. Unfortunately, they are also the intuitions that we find the most compelling: mistaken intuitions about how our own minds work.
  • The finding that people fail to notice unexpected events when their attention is otherwise engaged is interesting. What is doubly intriguing is the mismatch between what we notice and what we think we will notice.
  • If you believe you will notice unexpected events regardless of how much of your attention is devoted to other tasks, you won't be vigilant enough for possible risks.
  • In the vast majority of cases in which DNA evidence exonerated a death-row inmate, the original conviction was based largely on the testimony of a confident eyewitness with a vivid memory of the crime. Jurors (and everyone else) tend to intuitively trust that when people are certain, they are likely to be right.
  • Study after study has shown that memories of important events like those are no more accurate than run-of-the-mill memories. They are more vivid, and we are therefore more confident about their accuracy, but that confidence is largely an illusion.
  • The most troublesome aspect of intuition may be the misleading role it plays in how we perceive patterns and identify causal relationships.
  • To determine whether two events are truly associated, we must consider how frequently each one occurs by itself, and how frequently they occur together. With just one or a few anecdotes, that's impossible, so it pays to err on the side of caution when inferring the existence of an association from a small number of examples.
  • We can rely on accumulated data, but too often we don't. Why not? Because our intuitions respond to vivid stories, not abstract statistics.
  • But more than a dozen large-scale epidemiological studies, involving hundreds of thousands of subjects, have shown that children who were vaccinated are no more likely to be diagnosed with autism than are children who were not vaccinated. In other words, there is no association between vaccination and autism. And in the absence of an association, there cannot be a causal link.
  • Many people who believe that vaccination can cause autism are aware of those data. But the intuitive cause-detector in our minds is driven by stories, not statistics, and once a compelling story leads us to ascribe an effect to a cause, we can hold to that belief as stubbornly as when we trust in our ability to talk on a phone while driving—or to spot a person wearing a gorilla suit.
  • Gladwell surrounds his arguments with examples that suggest an association, letting his readers infer the causal relationships he wants to convey.
  • The kouros example is effective because it capitalizes on our tendency to generalize from a single positive association, leading to the conclusion that intuition trumps reason. But in this case, a bit of thought would show that conclusion to be unlikely, even within the confined realm of art fakery. Think about how often experts throughout history have been duped by forgers because intuition told them that they were looking at the real thing. It is ironic that Gladwell (knowingly or not) exploits one of the greatest weaknesses of intuition—our tendency to blithely infer cause from anecdotes—in making his case for intuition's extraordinary power.
  •  
    By Daniel J. Simons and Christopher F. Chabris at The Chronicle Review - The Chronicle of Higher Education on May 30, 2010.
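
The excerpt above about judging whether two events are truly associated (how often each occurs alone versus together) comes down to comparing rates in a 2x2 table. The counts below are invented purely to show the calculation; they are not real epidemiological data.

    # Hypothetical 2x2 table: rows are exposed / not exposed, columns are outcome / no outcome.
    exposed_with_outcome      = 30
    exposed_without_outcome   = 9_970
    unexposed_with_outcome    = 30
    unexposed_without_outcome = 9_970

    rate_exposed   = exposed_with_outcome / (exposed_with_outcome + exposed_without_outcome)
    rate_unexposed = unexposed_with_outcome / (unexposed_with_outcome + unexposed_without_outcome)
    relative_risk  = rate_exposed / rate_unexposed

    print(f"Outcome rate among the exposed:   {rate_exposed:.4f}")
    print(f"Outcome rate among the unexposed: {rate_unexposed:.4f}")
    print(f"Relative risk:                    {relative_risk:.2f}")   # 1.0 means no association
    # Plenty of "exposed and affected" anecdotes exist (30 of them here), yet with
    # equal rates in both groups there is no association at all.
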
anonymous

The Stress of a Busy Environment Helps Mice Beat Back Cancer - 0 views

  • Whereas most people live in fairly safe environments, with plenty of food and some degree of social interaction, “our data suggests that we shouldn’t just be avoiding stress, we should be living more socially and physically challenging lives,” During says [Scientific American].
  • Mice were then injected with tumor cells, which led to malignancies in all of the control animals within 15 days… The rate of tumor formation in animals living in the enriched environment was significantly delayed, and 15 percent had not developed tumors after nearly three weeks; when tumors were visible, they were 43 percent smaller than the lesions on control animals
  •  
    'Whereas most people live in fairly safe environments, with plenty of food and some degree of social interaction, "our data suggests that we shouldn't just be avoiding stress, we should be living more socially and physically challenging lives," During says.' By Andrew Moseman at 80beats (Discover Magazine) on July 9, 2010.
anonymous

Europeans Bury 'Digital DNA' Inside Mountain - 0 views

  • In a secret bunker known as the Swiss Fort Knox deep in the Swiss Alps, European researchers recently deposited a “digital genome” that will provide the blueprint for future generations to read data stored using defunct technology.
  • The capsule is the culmination of the four-year "Planets" project, a 15 million-euro ($18.49 million) effort which draws on the expertise of 16 European libraries, archives and research institutions, to preserve the world's digital assets as hardware and software.
  • “Unlike hieroglyphics carved in stone or ink on parchment, digital data has a shelf life of years not millennia,” said Andreas Rauber, a professor at the University of Technology of Vienna, which is a partner in the project.
  • ...1 more annotation...
  • People will be puzzled at what they find when they open the time capsule, said Rauber. “In 25 years people will be astonished to see how little time must pass to render data carriers unusable because they break or because you don’t have the devices anymore,” he said. “The second shock will probably be what fraction of the objects we can’t use or access in 25 years and that’s hard to predict.”
  •  
    At Sputnik Laboratory on June 15, 2010
anonymous

Twelve facts about guns and mass shootings in the United States - 0 views

  • If roads were collapsing all across the United States, killing dozens of drivers, we would surely see that as a moment to talk about what we could do to keep roads from collapsing. If terrorists were detonating bombs in port after port, you can be sure Congress would be working to upgrade the nation’s security measures. If a plague was ripping through communities, public-health officials would be working feverishly to contain it. 
  • Only with gun violence do we respond to repeated tragedies by saying that mourning is acceptable but discussing how to prevent more tragedies is not.
  • “Since 1982, there have been at least 61 mass murders carried out with firearms across the country, with the killings unfolding in 30 states from Massachusetts to Hawaii,” they found. And in most cases, the killers had obtained their weapons legally:
  • ...13 more annotations...
  • 15 of the 25 worst mass shootings in the last 50 years took place in the United States. Time has the full list here. In second place is Finland, with two entries.
  • Lots of guns don’t necessarily mean lots of shootings, as you can see in Israel and Switzerland.*
  • *Correction: The info is out-of-date, if not completely wrong. Israel and Switzerland have tightened their gun laws substantially, and now pursue an entirely different approach than the United States. More details here. I apologize for the error.
  • Of the 11 deadliest shootings in the US, five have happened from 2007 onward.
  • Kieran Healy, a sociologist at Duke University, made this graph of “deaths due to assault” in the United States and other developed countries. We are a clear outlier.
  • “The most striking features of the data are (1) how much more violent the U.S. is than other OECD countries (except possibly Estonia and Mexico, not shown here), and (2) the degree of change—and recently, decline—there has been in the U.S. time series considered by itself.”
  • In a subsequent post, Healy drilled further into the numbers and looked at deaths due to assault in different regions of the country. Just as the United States is a clear outlier in the international context, the South is a clear outlier in the national context:
  • “For all the attention given to America’s culture of guns, ownership of firearms is at or near all-time lows,” writes political scientist Patrick Egan. The decline is most evident on the General Social Survey, though it also shows up on polling from Gallup, as you can see on this graph:
  • The Harvard Injury Control Research Center assessed the literature on guns and homicide and found that there’s substantial evidence that indicates more guns means more murders. This holds true whether you’re looking at different countries or different states. Citations here.
  • Higher populations, more stress, more immigrants, and more mental illness were not correlated with more deaths from gun violence. But one thing he found was, perhaps, perfectly predictable: States with tighter gun control laws appear to have fewer gun-related deaths. The disclaimer here is that correlation is not causation. But correlations can be suggestive:
  • Since 1990, Gallup has been asking Americans whether they think gun control laws should be stricter. The answer, increasingly, is that they don’t. “The percentage in favor of making the laws governing the sale of firearms ‘more strict’ fell from 78% in 1990 to 62% in 1995, and 51% in 2007,” reports Gallup. “In the most recent reading, Gallup in 2010 found 44% in favor of stricter laws. In fact, in 2009 and again last year, the slight majority said gun laws should either remain the same or be made less strict.”
  • An August CNN/ORC poll asked respondents whether they favor or oppose a number of specific policies to restrict gun ownership. And when you drill down to that level, many policies, including banning the manufacture and possession of semi-automatic rifles, are popular.
  • Shootings don’t tend to substantially affect views on gun control. That, at least, is what the Pew Research Center found:
  •  
    "When we first collected much of this data, it was after the Aurora, Colo. shootings, and the air was thick with calls to avoid "politicizing" the tragedy. That is code, essentially, for "don't talk about reforming our gun control laws." Let's be clear: That is a form of politicization. When political actors construct a political argument that threatens political consequences if other political actors pursue a certain political outcome, that is, almost by definition, a politicization of the issue. It's just a form of politicization favoring those who prefer the status quo to stricter gun control laws."
anonymous

American Gun Deaths to Exceed Traffic Fatalities by 2015 - 1 views

  • The fall in traffic deaths resulted from safer vehicles, restricted privileges for young drivers and seat-belt and other laws, he said. By contrast, “we’ve made policy decisions that have had the impact of making the widest array of firearms available to the widest array of people under the widest array of conditions.” While fewer households have guns, people who own guns are buying more of them, he said.
  • Traffic fatalities in 2011 were the lowest since 1949, according to the National Highway Traffic Safety Administration. Although drivers in the U.S. logged fewer miles than in 2010, the fatality rate was the lowest on record, 1.1 deaths for each 100 million vehicle miles driven.
  •  
    "Guns and cars have long been among the leading causes of non-medical deaths in the U.S. By 2015, firearm fatalities will probably exceed traffic fatalities for the first time, based on data compiled by Bloomberg."
anonymous

New cosmic background radiation map challenges some foundations of cosmology | KurzweilAI - 0 views

  • The fluctuations in the CMB temperatures at large angular scales do not match those predicted by the standard model in physics — their signals are not as strong as expected from the smaller scale structure revealed by Planck.
  • An asymmetry in the average temperatures on opposite hemispheres of the sky runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.
  • A cold spot extends over a patch of sky that is much larger than expected.
  • ...10 more annotations...
  • Dark energy, a mysterious force thought to be responsible for accelerating the expansion of the Universe, accounts for less than previously thought.
  • One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale than we can observe.
  • In this scenario, the light rays from the CMB may have taken a more complicated route through the Universe than previously understood, resulting in some of the unusual patterns observed today.
  • The Planck data also set a new value for the rate at which the Universe is expanding today, known as the Hubble constant. At 67.15 kilometers per second per megaparsec, this is significantly less than the current standard value in astronomy. The data imply that the age of the Universe is 13.82 billion years. (A rough cross-check of how these two figures relate appears after this entry.)
    • anonymous
       
      Whoa. 13.82 billion?
  • oldest light in our Universe, imprinted on the sky when it was just 380 000 years old.
  • At that time, the young Universe was filled with a hot, dense soup of interacting protons, electrons and photons at about 2700°C.
  • This cosmic microwave background (CMB) shows tiny temperature fluctuations that correspond to regions of slightly different densities at very early times, representing the seeds of all future structure: the stars and galaxies of today.
  • According to the standard model of cosmology, the fluctuations arose immediately after the Big Bang and were stretched to cosmologically large scales during a brief period of accelerated expansion known as inflation.
  • The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.
  • “The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.
  •  
    "The most detailed map ever created of the cosmic microwave background - the relic radiation from the Big Bang - acquired by ESA's Planck space telescope, has been released, revealing features that challenge the foundations of our current understanding of the Universe and may require new physics."
anonymous

Speed Up SSD & Optimize For Performance with 9 Quality Tweaks - 0 views

  • 1) Enable Write Caching in Windows 7
  • This tweak enables write caching on your SSD, which can speed it up by a small margin. It tells Windows to buffer write commands in RAM, which is many times faster than the drive, before flushing them to the SSD.
  • To do this, navigate to Computer > Properties > Device Manager > Disk Drives, right-click your SSD and select Properties, open the Policies tab, tick the Enable write caching option, then click Apply > OK and you’re done.
  • ...13 more annotations...
  • 2) Speed Up SSD by Using RAM Cache
  • Begin by downloading the Fancy Cache software (~2 MB). After installing and starting the program, its interface lists the storage media connected to your PC. Select your SSD from the list and configure a cache size to suit your needs (refer to the image); in this example 3192 MB of RAM is allocated as the cache and deferred writing is enabled. Once the cache size and related options are set, click Start Caching and the caching setup for your SSD is complete.
  • Optimizing and Maintaining Your SSD
  • The TRIM Command: the very first step after setting up an SSD is to enable the TRIM command in Windows. Windows 8 already enables TRIM when it detects an SSD. Keeping TRIM enabled helps preserve the SSD’s lifespan by letting the drive’s garbage collection know which blocks are no longer in use.
  • 3) Enabling the TRIM Command to Optimize the SSD: from the Start Menu, type CMD in the search box, right-click the Command Prompt icon and choose Run as Administrator. Type fsutil behavior query disabledeletenotify and press Enter. If it shows disabledeletenotify = 0, TRIM is already enabled in Windows and no change is needed; if it shows disabledeletenotify = 1, TRIM is disabled. In that case, type fsutil behavior set disabledeletenotify 0 and TRIM will be enabled. (A small scripted version of this check appears after this entry.)
  • 4) Should you enable hibernation while using an SSD in Windows?
  • Drive Defragmentation in Windows 7
  • Drive defragmentation organizes fragmented data and improves the performance of a hard disk. But defragmentation doesn’t speed up an SSD: these drives contain no rotating parts, and defragmenting means extra data transfers, i.e. more write cycles, which actually shorten the SSD’s lifespan. So it’s always best to disable drive defragmentation in Windows.
  • 6) Turn Off Superfetch and Prefetch in Windows
  • So it’s better to disable them if you have 4 GB of memory or less. If you have plenty of RAM installed, keeping them enabled might give a marginal boost to performance.
  • Bring up the Registry Editor by typing regedit in the Run window (Win + R) and pressing Enter. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters. You’ll see the EnablePrefetcher and EnableSuperfetch values listed in the right pane. Double-click each and change the value from 3 to 0, then restart your system for the changes to take effect.
  • SSDs are much faster than hard drives, with access times around 0.1 milliseconds. Drive indexing increases the number of file write operations without really speeding up the SSD, so it’s better to turn it off. Open My Computer, right-click your SSD and choose Properties, then untick the “Allow files on this drive to have contents indexed in addition to file properties” option. You’ll be prompted with warnings after clicking Apply; press Ignore All, and a processing window will run for a few minutes while the changes are applied.
  • 9) Disabling Drive Indexing to Optimize SSD
  •  
    "Users who crave for more performance out of their system would definitely consider investing a pretty decent ssd to speed up their PC. Unlike hard drives SSD's work in an different manner. Users who upgrade from hard drives to SSD's often get confounded with doubts whether they need to do the required maintenance which they might have been doing with the hard drives, to speed up ssd and optimize its performance."
anonymous

Jeff Dean facts: How a Google programmer became the Chuck Norris of the Internet. - Slate Magazine - 0 views

  • Meanwhile, in the shadows of these giants—all of whom have graduated from day-to-day gruntwork—are legions of faceless developers who tap away at keyboards every day to build the products and systems we all use.
  • In the tech world, more so than in most other industries, those employees are far from interchangeable. A great accountant might save you 5 percent on your taxes. A great baseball player will reach base just a bit more often than a mediocre baseball player. But a great software developer can do in a week what might take months for a team of 10 lesser developers—the difference is exponential rather than marginal.
  • As a high schooler, he wrote software for analyzing vast sets of epidemiological data that he says was “26 times faster” than what professionals were using at the time. The system, called Epi Info, has been adopted by the Centers for Disease Control and translated into 13 languages.
  • ...2 more annotations...
  • Google’s founding ideas came from Page and Brin, world-class developers in their own right. In the late 1990s they built PageRank, an algorithm for returning the most relevant results to a given search query. The focus on relevance put Google on a course to surpass Yahoo, AltaVista, and the day’s other leading search engines. But as the upstart grew in popularity, it faced a tremendous computing challenge. “We couldn’t deploy machines fast enough” to keep up with demand, Dean recalls.
  • Ghemawat helped lead a team that built the Google File System, which allowed for huge files to be efficiently distributed across thousands of cheap servers. Then Dean and Ghemawat developed a programming tool called MapReduce that allowed developers to efficiently process gargantuan data sets with those machines working in parallel. (A toy illustration of the map-and-reduce idea follows this entry.)
  •  
    "The programs that Dean was instrumental in building-MapReduce, BigTable, Spanner-are not the ones most Google users associate with the company. But they're the kind that made Google-and, consequently, much of the modern Web as we know it-possible. And the projects he's working on now have the potential to revolutionize information technology once again."
anonymous

Hellfire, Morality and Strategy - 2 views

  • On one side of this dispute are those who regard them simply as another weapon of war whose virtue is the precision with which they strike targets.
  • On the other side are those who argue that in general, unmanned aerial vehicles are used to kill specific individuals, frequently civilians, thus denying the targeted individuals their basic right to some form of legal due process.
  • Let's begin with the weapons systems, the MQ-1 Predator and the MQ-9 Reaper. The media call them drones, but they are actually remotely piloted aircraft. Rather than being in the cockpit, the pilot is at a ground station, receiving flight data and visual images from the aircraft and sending command signals back to it via a satellite data link.
  • ...28 more annotations...
  • Most airstrikes from these aircraft use Hellfire missiles, which cause less collateral damage.
  • Unlike a manned aircraft, unmanned aerial vehicles can remain in the air for an extended period of time -- an important capability for engaging targets that may only present a very narrow target window. This ability to loiter, and then strike quickly when a target presents itself, is what has made these weapons systems preferable to fixed wing aircraft and cruise missiles.
  • The Argument Against Airstrikes
  • The modern battlefield -- and the ancient as well -- has been marked by anonymity. The enemy was not a distinct individual but an army, and the killing of soldiers in an enemy army did not carry with it any sense of personal culpability. In general, no individual soldier was selected for special attention, and his death was not an act of punishment. He was killed because of his membership in an army and not because of any specific action he might have carried out.
  • This distinguishes unmanned aerial vehicles from most weapons that have been used since the age of explosives began.
  • There are those who object to all war and all killing; we are not addressing those issues here. We are addressing the arguments of those who object to this particular sort of killing. The reasoning is that when you are targeting a particular individual based on his relationships, you are introducing the idea of culpability, and that that culpability makes the decision-maker -- whoever he is -- both judge and executioner, without due process.
  • Again excluding absolute pacifists from this discussion, the objection is that the use of unmanned aerial vehicles is not so much an act of war as an act of judgment and, as such, violates international law that requires due process for a soldier being judged and executed. To put it simply, the critics regard what they call drone strikes as summary executions, not acts of war.
  • The Argument for Airstrikes
  • The counterargument is that the United States is engaged in a unique sort of war.
  • The primary unit is the individual, and the individuals -- particularly the commanders -- isolate themselves and make themselves as difficult to find as possible. Given their political intentions and resources, sparse forces dispersed without regard to national boundaries use their isolation as the equivalent of technological stealth to make them survivable and able to carefully mount military operations against the enemy at unpredictable times and in unpredictable ways.
  • The argument for using strikes from unmanned aerial vehicles is that it is not an attack on an individual any more than an artillery barrage that kills a hundred is an attack on each individual. Rather, the jihadist movement presents a unique case in which the individual jihadist is the military unit.
  • The argument in favor of using unmanned aerial vehicle strikes is, therefore, that the act of killing the individual is a military necessity dictated by the enemy's strategy and that it is carried out with the understanding that both intelligence and precision might fail, no matter how much care is taken.
  • It would seem to me that these strikes do not violate the rules of war and that they require no more legal overview than was given in thousands of bomber raids in World War II.
  • Ignoring the question of whether jihadist operations are in accordance with the rules and customs of war, their failure to carry a "fixed distinctive sign recognizable at a distance" is a violation of both the Hague and Geneva conventions. This means that considerations given to soldiers under the rules of war do not apply to those waging war without insignia.
  • Open insignia is fundamental to the rules of war. It was instituted after the Franco-Prussian war, when French snipers dressed as civilians fired on Germans. It was viewed that the snipers had endangered civilians because it was a soldier's right to defend himself and that since they were dressed as civilians, the French snipers -- not the Germans -- were responsible for the civilian deaths.
  • the onus on ascertaining the nature of the target rests with the United States, but if there is error, the responsibility for that error rests with jihadists for not distinguishing themselves from civilians.
  • There is of course a greater complexity to this: attacking targets in countries that are not in a state of war with the United States and that have not consented to these attacks. For better or worse, the declaration of war has not been in fashion since World War II. But the jihadist movement has complicated this problem substantially.
  • In a method of war where the individual is the prime unit and where lack of identification is a primary defensive method, the conduct of intelligence operations wherever the enemy might be, regardless of borders, follows. So do operations to destroy enemy units -- individuals. If a country harbors such individuals knowingly, it is an enemy. If it is incapable of destroying the enemy units, it forfeits its right to claim sovereignty since part of sovereignty is a responsibility to prevent attacks on other countries.
  • If we simply follow the logic we laid out here, then the critics of unmanned aerial vehicle strikes have a weak case. It is not illegitimate to target individuals in a military force like the jihadist movement, and international law holds them responsible for collateral damage, not the United States.
  • since al Qaeda tried in the past to operate in the United States itself, and its operatives might be in the United States, it logically follows that the United States could use unmanned aerial vehicles domestically as well. Citizenship is likewise no protection from attacks against a force hostile to the United States.
  • There are two points I have been driving toward.
  • The first is that the outrage at targeted killing is not, in my view, justified on moral or legal grounds.
  • The second is that in using these techniques, the United States is on a slippery slope because of the basis on which it has chosen to wage war.
  • The enemy strategy is to draw the United States into an extended conflict that validates its narrative that the United States is permanently at war with Islam. It wants to force the United States to engage in as many countries as possible. From the U.S. point of view, unmanned aerial vehicles are the perfect weapon because they can attack the jihadist command structure without risk to ground forces. From the jihadist point of view as well, unmanned aerial vehicles are the perfect weapon because their efficiency allows the jihadists to lure the United States into other countries and, with sufficient manipulation, can increase the number of innocents who are killed.
  • In this sort of war, the problem of killing innocents is practical. It undermines the strategic effort. The argument that it is illegal is dubious, and to my mind, so is the argument that it is immoral. The argument that it is ineffective in achieving U.S. strategic goals of eliminating the threat of terrorist actions by jihadists is my point.
  • The broader the engagement, the greater the perception of U.S. hostility to Islam, the easier the recruitment until the jihadist forces reach a size that can't be dealt with by isolated airstrikes.
  • In warfare, enemies will try to get you to strike at what they least mind losing. The case against strikes by unmanned aerial vehicles is not that they are ineffective against specific targets but that the targets are not as vital as the United States thinks. The United States believes that the destruction of the leadership is the most efficient way to destroy the threat of the jihadist movement. In fact it only mitigates the threat while new leadership emerges. The strength of the jihadist movement is that it is global, sparse and dispersed. It does not provide a target whose destruction weakens the movement. However, the jihadist movement's weakness derives from its strength: It is limited in what it can do and where.     
  • In the long run, it is not clear that the cost is so little. A military strategy to defeat the jihadists is impossible. At its root, the real struggle against the jihadists is ideological, and that struggle simply cannot be won with Hellfire missiles.
  •  
    "Airstrikes by unmanned aerial vehicles have become a matter of serious dispute lately. The controversy focuses on the United States, which has the biggest fleet of these weapons and which employs them more frequently than any other country. On one side of this dispute are those who regard them simply as another weapon of war whose virtue is the precision with which they strike targets. On the other side are those who argue that in general, unmanned aerial vehicles are used to kill specific individuals, frequently civilians, thus denying the targeted individuals their basic right to some form of legal due process."
  •  
    I'm starting to come around to the objections of expeditionary troops trying to put down the American colonial revolt. There's something to having to look someone in the face when you kill them.
anonymous

Is Organic Food Really The Same As Conventional? - 0 views

  • Despite what organic zealots are telling you, this wasn’t a bad study. It was a meta-analysis that examined a number of relevant health measures comparing organic versus conventionally grown foods over the last several decades.
  • One problem is that the word “organic” is a huge umbrella that includes sustainable, biodynamic farming practices as well as huge-scale industrial operations that barely squeeze under the “certified organic” labeling standards. As a result there is a tremendous amount of heterogeneity (a scientific word for a wide range of differences) between the organic foods being tested, as well as the types of studies that are performed. As a result, it is difficult to measure consistent differences (aka statistical significance) between organic and conventional foods in this kind of study. Unfortunately, this doesn’t do much to further our understanding of how growing practices affect health.
  • The huge variance among farming practices that fit under the organic umbrella is not trivial.
  • ...7 more annotations...
  • Large organic farms are typically monoculture fields just like large conventional farms, though more crop rotation is required. Industrial organic poultry and beef farms also look oddly similar to conventional industrial feedlots, even if the animals are eating organic feed. In fact, both organic and conventional industrial farms are often owned by the same mega-corporations, and share the same bottom line of profit. There’s no reason to suspect that these industrial organic foods would be markedly more nutritious than conventionally grown foods.
  • Interestingly, despite the wide range in the quality of foods that qualify as organic, the Stanford study did find some significant differences. Organic produce contained significantly more phenols, the cancer fighting chemicals found in red wine, green tea, chocolate and many fruits and vegetables. However, this finding was glossed over in favor of the non-significant differences found between vitamin C, beta-carotene and vitamin E levels in organic versus conventional foods.
  • Soil quality and weather (the raw ingredients) are by far the biggest factors in the nutrient levels of produce, with freshness and storage methods being next in line.
  • Indeed, organic agriculture typically has more minerals and the Stanford team confirmed they contain significantly more phosphorus. But there is so much variety among plants, and from season to season, that you shouldn’t necessarily expect large, consistent differences in the levels of common vitamins like C and E from genetically identical plants.
  • The Stanford study confirms organic agriculture has substantially fewer pesticide contaminations, but for some reason this finding was also glossed over since the conventional produce levels “didn’t exceed maximum allowed limits.” Logically, however, if limiting pesticide exposure is important to you (as it should be) organic produce is the better option.
  • The animal studies were even more encouraging. Small but significant improvements in fatty acid profiles were found for organic milk and chickens, which contained more healthy omega-3 fatty acids. More importantly, antibiotic resistant bacteria, the kind that are becoming more common (and deadly) in our own hospitals, were 33% more likely to be found on conventional meat products than on organic meat.
  • From this study it seems reasonable to conclude that organics, even industrial organics, are superior to conventional foods in some ways.
  •  
    "On Monday a study from scientists at Stanford made headlines by concluding that there isn't much health value in choosing organic food over conventional food. The headline didn't surprise me in the least, I've seen similar ones at least a dozen times before, but there is still so much confusion among the general public around this topic that it's worth revisiting in the wake of this new data."
anonymous

2011 Wisconsin Crash Calendar & Interview - 0 views

  • I love this infographic design!  Designed by Joni Graves, a Program Director at the University of Wisconsin-Madison Department of Engineering Professional Development (that’s a mouthful!).  I highly recommend downloading the PDF version and taking a closer look on your own.
  •  
    The Wisconsin Bureau of Transportation Safety (BOTS) uses printed copies of the infographic calendar at meetings around the state with various groups to generate discussions about what causes crashes and how to interpret what the data shows.