Long Game: Group items tagged technology


anonymous

How people read online: Why you won't finish this article. - 1 views

  • For every 161 people who landed on this page, about 61 of you—38 percent—are already gone.
  • We’re at the point in the page where you have to scroll to see more. Of the 100 of you who didn’t bounce, five are never going to scroll.
  • You’re tweeting a link to this article already? You haven’t even read it yet! What if I go on to advocate something truly awful, like a constitutional amendment requiring that we all type two spaces after a period?
  • ...23 more annotations...
  • Only a small number of you are reading all the way through articles on the Web.
  • Schwartz’s data shows that readers can’t stay focused. The more I type, the more of you tune out. And it’s not just me. It’s not just Slate. It’s everywhere online. When people land on a story, they very rarely make it all the way down the page. A lot of people don’t even make it halfway.
  • Even more dispiriting is the relationship between scrolling and sharing. Schwartz’s data suggest that lots of people are tweeting out links to articles they haven’t fully read. If you see someone recommending a story online, you shouldn’t assume that he has read the thing he’s sharing.
  • OK, we’re a few hundred words into the story now. According to the data, for every 100 readers who didn’t bounce up at the top, there are about 50 who’ve stuck around. Only one-half!
  • Take a look at the following graph created by Schwartz, a histogram showing where people stopped scrolling in Slate articles.
  • A typical Web article is about 2000 pixels long.
  • There’s a spike at 0 percent—i.e., the very top pixel on the page—because 5 percent of readers never scrolled deeper than that spot.
  • Finally, the spike near the end is an anomaly caused by pages containing photos and videos—on those pages, people scroll through the whole page.
  • Or look at John Dickerson’s fantastic article about the IRS scandal or something. If you only scrolled halfway through that amazing piece, you would have read just the first four paragraphs. Now, trust me when I say that beyond those four paragraphs, John made some really good points about whatever it is his article is about, some strong points that—without spoiling it for you—you really have to read to believe. But of course you didn’t read it because you got that IM and then you had to look at a video and then the phone rang …
  • do you know what you get on a typical Slate page if you never scroll? Bupkis.
  • Schwartz’s histogram for articles across lots of sites is in some ways more encouraging than the Slate data, but in other ways even sadder:
  • On these sites, the median scroll depth is slightly greater—most people get to 60 percent of the article rather than the 50 percent they reach on Slate pages. On the other hand, on these pages a higher share of people—10 percent—never scroll. In general, though, the story across the Web is similar to the story at Slate: Few people are making it to the end, and a surprisingly large number aren’t giving articles any chance at all.
  • Chartbeat can’t directly track when individual readers tweet out links, so it can’t definitively say that people are sharing stories before they’ve read the whole thing. But Chartbeat can look at the overall tweets to an article, and then compare that number to how many people scrolled through the article.
  • Here’s Schwartz’s analysis of the relationship between scrolling and sharing on Slate pages:
  • [Chart courtesy of Chartbeat.] And here’s a similar look at the relationship between scrolling and sharing across sites monitored by Chartbeat: [Chart courtesy of Chartbeat.]
  • There’s a very weak relationship between scroll depth and sharing. Both at Slate and across the Web, articles that get a lot of tweets don’t necessarily get read very deeply.
  • Articles that get read deeply aren’t necessarily generating a lot of tweets.  
  • Schwartz tells me that on a typical Slate page, only 25 percent of readers make it past the 1,600th pixel of the page, and we’re way beyond that now.
  • Sure, like every other writer on the Web, I want my articles to be widely read, which means I want you to Like and Tweet and email this piece to everyone you know. But if you had any inkling of doing that, you’d have done it already. You’d probably have done it just after reading the headline and seeing the picture at the top. Nothing I say at this point matters at all.
  • So, what the hey, here are a couple more graphs, after which I promise I’ll wrap things up for the handful of folks who are still left around here. (What losers you are! Don’t you have anything else to do?) This heatmap shows where readers spend most of their time on Slate pages:
  • Schwartz told me I should be very pleased with Slate’s map, which shows that a lot of people are moved to spend a significant amount of their time below the initial scroll window of an article page.
  • Since you usually have to scroll below the fold to see just about any part of an article, Slate’s below-the-fold engagement looks really great. But if articles started higher up on the page, it might not look as good. In other words: Ugh.
  • Maybe this is just our cultural lot: We live in the age of skimming. I want to finish the whole thing, I really do. I wish you would, too. Really—stop quitting! But who am I kidding. I’m busy. You’re busy. There’s always something else to read, watch, play, or eat. OK, this is where I’d come up with some clever ending. But who cares? You certainly don’t. Let’s just go with this: Kicker TK.
  •  
    "Schwartz's data shows that readers can't stay focused. The more I type, the more of you tune out. And it's not just me. It's not just Slate. It's everywhere online. When people land on a story, they very rarely make it all the way down the page. A lot of people don't even make it halfway. Even more dispiriting is the relationship between scrolling and sharing. Schwartz's data suggest that lots of people are tweeting out links to articles they haven't fully read. If you see someone recommending a story online, you shouldn't assume that he has read the thing he's sharing."
anonymous

Achieving Techno-Literacy - 0 views

  • Every new technology will bite back. The more powerful its gifts, the more powerfully it can be abused. Look for its costs.
  • Technologies improve so fast you should postpone getting anything you need until the last second. Get comfortable with the fact that anything you buy is already obsolete.
  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • Be suspicious of any technology that requires walls. If you can fix it, modify it or hack it yourself, that is a good sign.
  • The proper response to a stupid technology is to make a better one, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
  • Every technology is biased by its embedded defaults: what does it assume?
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.
  •  
    "Technology will change faster than we can teach it. My son studied the popular programming language C++ in his home-school year; that knowledge could be economically useless soon. The accelerating pace of technology means his eventual adult career does not exist yet. Of course it won't be taught in school. But technological smartness can be. Here is the kind of literacy that we tried to impart:" By Kevin Kelly at The New York Times on September 16, 2010.
anonymous

A Brief History of the Corporation: 1600 to 2100 - 1 views

  • In its 400+ year history, the corporation has achieved extraordinary things, cutting around-the-world travel time from years to less than a day, putting a computer on every desk, a toilet in every home (nearly) and a cellphone within reach of every human.  It even put a man on the Moon and kinda-sorta cured AIDS.
  • The Age of Corporations is coming to an end. The traditional corporation won’t vanish, but it will cease to be the center of gravity of economic life in another generation or two. Corporations will live on as religious institutions do today, as weakened ghosts of more vital institutions from centuries ago.
  • this post is mostly woven around ideas drawn from five books that provide appropriate fuel for this business-first frame. I will be citing, quoting and otherwise indirectly using these books over several future posts
  • ...73 more annotations...
  • For a long time, I was misled by the fact that 90% of the available books frame globalization and the emergence of modernity in terms of the nation-state as the fundamental unit of analysis, with politics as the fundamental area of human activity that shapes things.
  • But the more I’ve thought about it, the more I’ve been pulled towards a business-first perspective on modernity and globalization.
  • The human world, like physics, can be reduced to four fundamental forces: culture, politics, war and business.
  • Culture is the most mysterious, illegible and powerful force.
  • But one quality makes gravity dominate at large space-time scales: gravity affects all masses and is always attractive, never repulsive.  So despite its weakness, it dominates things at sufficiently large scales. I don’t want to stretch the metaphor too far, but something similar holds true of business.
  • On the scale of days or weeks, culture, politics and war matter a lot more in shaping our daily lives.
  • Business though, as an expression of the force of unidirectional technological evolution, has a destabilizing unidirectional effect. It is technology, acting through business and Schumpeterian creative-destruction, that drives monotonic, historicist change, for good or bad. Business is the locus where the non-human force of technological change sneaks into the human sphere.
  • Culture is suspicious of technology. Politics is mostly indifferent to and above it. War-making uses it, but maintains an arms-length separation.
  • Business? It gets into bed with it. It is sort of vaguely plausible that you could switch artists, politicians and generals around with their peers from another age and still expect them to function. But there is no meaningful way for a businessman from (say) 2000 BC to comprehend what Mark Zuckerberg does, let alone take over for him. Too much magical technological water has flowed under the bridge.
  • It is business that creates the world of magic, not technology itself. And the story of business in the last 400 years is the story of the corporate form.
  • There are some who treat corporate forms as yet another technology (in this case a technology of people-management), but despite the trappings of scientific foundations (usually in psychology) and engineering synthesis (we speak of organizational “design”), the corporate form is not a technology.  It is the consequence of a social contract like the one that anchors nationhood. It is a codified bundle of quasi-religious beliefs externalized into an animate form that seeks to preserve itself like any other living creature.
  • What was new was the idea of a publicly traded joint-stock corporation, an entity with rights similar to those of states and individuals, with limited liability and significant autonomy
  • two important points about this evolution of corporations.
  • The first point is that the corporate form was born in the era of Mercantilism, the economic ideology that (zero-sum) control of land is the foundation of all economic power.
  • In politics, Mercantilism led to balance-of-power models.
  • In business, once the Age of Exploration (the 16th century) opened up the world, it led to mercantilist corporations focused on trade
  • The forces of radical technological change — the Industrial Revolution — did not seriously kick in until after nearly 200 years of corporate evolution (1600-1800) in a mercantilist mold.
  • Smith was both the prophet of doom for the Mercantilist corporation, and the herald of what came to replace it: the Schumpeterian corporation.
  • The corporate form therefore spent almost 200 years — nearly half of its life to date — being shaped by Mercantilist thinking, a fundamentally zero-sum way of viewing the world.
  • It was not until after the American Civil War and the Gilded Age that businesses fundamentally reorganized around time instead of space, which led, as we will see, to a central role for ideas and therefore the innovation function.
  • The Black Hills Gold Rush of the 1870s, the focus of the Deadwood saga, was in a way the last hurrah of Mercantilist thinking. William Randolph Hearst, the son of gold mining mogul George Hearst (who took over Deadwood in the 1870s), made his name with newspapers. The baton had formally been passed from Mercantilists to Schumpeterians.
    • anonymous
       
      So, Mercantilism was about colonizing space. Corporatism is about colonizing time. This is a pretty useful (though arguably too-reductionist) way to latch on to the underpinning of later thoughts.
  • This divide between the two models can be placed at around 1800, the nominal start date of the Industrial Revolution, as the ideas of Renaissance Science met the energy of coal to create a cocktail that would allow corporations to colonize time.
  • The second thing to understand about the evolution of the corporation is that the apogee of power did not coincide with the apogee of reach.
  • for America, corporations employed less than 20% of the population in 1780 and over 80% in 1980, and the share has been declining since
  • Certainly corporations today seem far more powerful than those of the 1700s, but the point is that the form is much weaker today, even though it has organized more of our lives. This is roughly the same as the distinction between fertility of women and population growth: the peak in fertility (a per-capita number) and peak in population growth rates (an aggregate) behave differently.
  • a useful 3-phase model of the history of the corporation: the Mercantilist/Smithian era from 1600-1800, the Industrial/Schumpeterian era from 1800-2000 and finally, the era we are entering, which I will dub the Information/Coasean era
    • anonymous
       
      I think it would be useful to map these eras against the backdrop of my previously established Generational timeline (as well as the StratFor 50-year cycle breakdown) in order to see if there are any self-supporting model elements.
  • By a happy accident, there is a major economist whose ideas help fingerprint the economic contours of our world: Ronald Coase.
  • To a large extent, the history of the first 200 years of corporate evolution is the history of the East India Company. And despite its name and nation of origin, to think of it as a corporation that helped Britain rule India is to entirely misunderstand the nature of the beast.
  • Two images hint at its actual globe-straddling, 10x-Walmart influence: the image of the Boston Tea Partiers dumping crates of tea into the sea during the American struggle for independence, and the image of smoky opium dens in China. One image symbolizes the rise of a new empire. The other marks the decline of an old one.
  • At a broader level, the EIC managed to balance an unbalanced trade equation between Europe and Asia whose solution had eluded even the Roman empire.
  • For this scheme to work, three foreground things and one background thing had to happen: the corporation had to effectively take over Bengal (and eventually all of India), Hong Kong (and eventually, all of China, indirectly) and England.
  • The background development was simpler. England had to take over the oceans and ensure the safe operations of the EIC.
  • eventually, as the threat from the Dutch was tamed, it became clear that the company actually had more firepower at its disposal than most of the nation-states it was dealing with. The realization led to the first big domino falling, in the corporate colonization of India, at the battle of Plassey.
  • The EIC was the original too-big-to-fail corporation. The EIC was the beneficiary of the original Big Bailout. Before there was TARP, there was the Tea Act of 1773 and the Pitt India Act of 1783. The former was a failed attempt to rein in the EIC, which cost Britain the American Colonies.  The latter created the British Raj as Britain doubled down in the east to recover from its losses in the west. An invisible thread connects the histories of India and America at this point. Lord Cornwallis, the loser at the Siege of Yorktown in 1781 during the revolutionary war, became the second Governor General of India in 1786.
  • But these events were set in motion over 30 years earlier, in the 1750s. There was no need for backroom subterfuge. It was all out in the open because the corporation was such a new beast that nobody really understood the dangers it represented.
  • there was nothing preventing its officers like Clive from simultaneously holding political appointments that legitimized conflicts of interest. If you thought it was bad enough that Dick Cheney used to work for Halliburton before he took office, imagine if he’d worked there while in office, with legitimate authority to use his government power to favor his corporate employer and make as much money on the side as he wanted, and call in the Army and Navy to enforce his will. That picture gives you an idea of the position Robert Clive found himself in, in 1757.
  • The East India bubble was a turning point.
  • Over the next 70 years, political, military and economic power were gradually separated and modern checks and balances against corporate excess came into being.
  • It is not too much of a stretch to say that for at least a century and a half, England’s foreign policy was a dance in Europe in service of the EIC’s needs on the oceans.
  • Mahan’s book is the essential lens you need to understand the peculiar military conditions in the 17th and 18th centuries that made the birth of the corporation possible.
  • The 16th century makes a vague sort of sense as the “Age of Exploration,” but it really makes a lot more sense as the startup/first-mover/early-adopter phase of corporate mercantilism. The period was dominated by the daring pioneer spirit of Spain and Portugal, which together served as the Silicon Valley of Mercantilism. But the maritime business operations of Spain and Portugal turned out to be the MySpace and Friendster of Mercantilism: pioneers who could not capitalize on their early lead.
  • Conventionally, it is understood that the British and the Dutch were the ones who truly took over. But in reality, it was two corporations that took over: the EIC and the VOC (the Dutch East India Company, Vereenigde Oost-Indische Compagnie, founded one year after the EIC), the Facebook and LinkedIn of Mercantile economics, respectively. Both were fundamentally more independent of the nation-states that had given birth to them than any business entities in history. The EIC more so than the VOC. Both eventually became complex multi-national beasts.
  • arguably, the doings of the EIC and VOC on the water were more important than the pageantry on land.  Today the invisible web of container shipping serves as the bloodstream of the world. Its foundations were laid by the EIC.
    • anonymous
       
      There was an excellent episode of the original Connections series that pointed this out, specifically focusing on the Dutch boats and the direct line to container ships and 747 cargo planes.
  • A new idea began to take its place in the early 19th century: the Schumpeterian corporation that controlled, not trade routes, but time. It added the second of the two essential Druckerian functions to the corporation: innovation.
  • I call this the “most misleading table in the world.”
  • corporations and nations may have been running on Mercantilist logic, but the undercurrent of Schumpeterian growth was taking off in Europe as early as 1500 in the less organized sectors like agriculture. It was only formally recognized and tamed in the early 1800s, but the technology genie had escaped.
  • The action shifted to two huge wildcards in world affairs of the 1800s: the newly-born nation of America and the awakening giant in the east, Russia. Per capita productivity is about efficient use of human time. But time, unlike space, is not a collective and objective dimension of human experience. It is a private and subjective one. Two people cannot own the same piece of land, but they can own the same piece of time.  To own space, you control it by force of arms. To own time is to own attention. To own attention, it must first be freed up, one individual stream of consciousness at a time.
  • The Schumpeterian corporation was about colonizing individual minds. Ideas powered by essentially limitless fossil-fuel energy allowed it to actually pull it off.
  • it is probably reasonably safe to treat the story of Schumpeterian growth as an essentially American story.
  • In many ways the railroads solved a vastly speeded up version of the problem solved by the EIC: complex coordination across a large area.  Unlike the EIC though, the railroads were built around the telegraph, rather than postal mail, as the communication system. The difference was like the difference between the nervous systems of invertebrates and vertebrates.
  • If the ship sailing the Indian Ocean ferrying tea, textiles, opium and spices was the star of the mercantilist era, the steam engine and steamboat opening up America were the stars of the Schumpeterian era.
  • The primary effect of steam was not that it helped colonize a new land, but that it started the colonization of time. First, social time was colonized. The anarchy of time zones across the vast expanse of America was first tamed by the railroads for the narrow purpose of maintaining train schedules, but ultimately the tools that served to coordinate train schedules (the mechanical clock and time zones) served to colonize human minds. An exhibit I saw recently at the Union Pacific Railroad Museum in Omaha clearly illustrates this crucial fragment of history:
  • For all its sophistication, the technology of sail was mostly a very-refined craft, not an engineering discipline based on science.
  • Steam power though was a scientific and engineering invention.
  • Scientific principles about gases, heat, thermodynamics and energy applied to practical ends, resulting in new artifacts. The disempowerment of craftsmen would continue through the Schumpeterian age, until Frederick Taylor found ways to completely strip-mine all craft out of the minds of craftsmen, and put it into machines and the minds of managers.
  • It sounds awful when I put it that way, and it was, in human terms, but there is no denying that the process was mostly inevitable and that the result was vastly better products.
  • The Schumpeterian corporation did to business what the doctrine of Blitzkrieg would do to warfare in 1939: move humans at the speed of technology instead of moving technology at the speed of humans.
  • Blitzeconomics allowed the global economy to roar ahead at 8% annual growth rates instead of the theoretical 0% average across the world for Mercantilist zero-sum economics. “Progress” had begun.
  • Two phrases were invented to name the phenomenon: “productivity” meant shrinking autonomously-owned time, and “increased standard of living” through time-saving devices became code for the fact that the “freed up” time gained through “labor saving” devices was actually the de facto property of corporations. It was a Faustian bargain.
  • Many people misunderstood the fundamental nature of Schumpeterian growth as being fueled by ideas rather than time. Ideas fueled by energy can free up time which can then partly be used to create more ideas to free up more time. It is a positive feedback cycle, but with a limit. The fundamental scarce resource is time. There is only one Earth worth of space to colonize. Only one fossil-fuel store of energy to dig out. Only 24 hours per person per day to turn into captive attention.
  • Then the Internet happened, and we discovered the ability to mine time as fast as it could be discovered in hidden pockets of attention. And we discovered limits. And suddenly a new peak started to loom: Peak Attention.
  • There is certainly plenty of energy all around (the Sun and the wind, to name two sources), but oil represents a particularly high-value kind. Attention behaves the same way.
  • Take an average housewife, the target of much time mining early in the 20th century. It was clear where her attention was directed. Laundry, cooking, walking to the well for water, cleaning, were all obvious attention sinks. Washing machines, kitchen appliances, plumbing and vacuum cleaners helped free up a lot of that attention, which was then immediately directed (as corporate-captive attention) to magazines and television.
  • The point isn’t that we are running out of attention. We are running out of the equivalent of oil: high-energy-concentration pockets of easily mined fuel.
  • There is a lot more money to be made in replacing hand-washing time with washing-machine plus magazine time, than there is to be found in replacing one hour of TV with a different hour of TV.
  • To get to Clay Shirky’s hypothetical notion of cognitive surplus, we need Alternative Attention sources. To put it in terms of per-capita productivity gains, we hit a plateau.
  • When Asia hits Peak Attention (America is already past it, I believe), absolute size, rather than big productivity differentials, will again define the game, and the center of gravity of economic activity will shift to Asia.
  • Once again, it is the oceans, rather than land, that will become the theater for the next act of the human drama. While American lifestyle designers are fleeing to Bali, much bigger things are afoot in the region. And when that shift happens, the Schumpeterian corporation, the oil rig of human attention, will start to decline at an accelerating rate. Lifestyle businesses and other oddball contraptions — the solar panels and wind farms of attention economics — will start to take over.
  • It will be the dawn of the age of Coasean growth.
  • Coasean growth is not measured in terms of national GDP growth. That’s a Smithian/Mercantilist measure of growth. It is also not measured in terms of 8% returns on the global stock market.  That is a Schumpeterian growth measure. For that model of growth to continue would be a case of civilizational cancer (“growth for the sake of growth is the ideology of the cancer cell” as Edward Abbey put it).
  • Coasean growth is fundamentally not measured in aggregate terms at all. It is measured in individual terms. An individual’s income and productivity may both actually decline, with net growth in a Coasean sense.
  • How do we measure Coasean growth? I have no idea. I am open to suggestions. All I know is that the metric will need to be hyper-personalized and relative to individuals rather than countries, corporations or the global economy. There will be a meaningful notion of Venkat’s rate of Coasean growth, but no equivalent for larger entities.
  • The fundamental scarce resource that Coasean growth discovers and colonizes is neither space, nor time. It is perspective.
  •  
    This is a lay-friendly, amateur mental exploration of the Corporation. It's also utterly absorbing and comes with the usual collection of caveats that we amateurs are accustomed to rattling off when we dunk ourselves into issues much bigger than ourselves. Thanks to BoingBoing, via Futurismic, for the pointer: http://www.boingboing.net/2011/06/23/a-brief-history-of-t.html http://futurismic.com/2011/06/22/a-brief-history-of-the-corporation-1600-to-2100/ "The year was 1772, exactly 239 years ago today, the apogee of power for the corporation as a business construct. The company was the British East India company (EIC). The bubble that burst was the East India Bubble. Between the founding of the EIC in 1600 and the post-subprime world of 2011, the idea of the corporation was born, matured, over-extended, reined-in, refined, patched, updated, over-extended again, propped-up and finally widely declared to be obsolete. Between 2011 and 2100, it will decline - hopefully gracefully - into a well-behaved retiree on the economic scene."
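
Reading the "positive feedback cycle, but with a limit" argument above, one natural formalization is logistic growth against a fixed attention budget. The toy simulation below is my own gloss on the essay, not math that appears in it; every parameter is invented.

```python
# Toy model: corporate "time mining" feeds on itself (freed attention funds
# new products that free more attention) but saturates against the hard
# budget of 24 hours per person per day. All parameters are invented.
POOL = 24.0   # hours of attention per person per day: the hard limit
mined = 0.5   # hours already captured at the start of the simulation
RATE = 0.8    # per-decade conversion of freed time into new mining

for decade in range(1900, 2030, 10):
    print(f"{decade}: {mined:5.1f} of {POOL} hours mined")
    mined += RATE * mined * (1 - mined / POOL)  # logistic feedback step
```

The early steps compound quickly (the washing-machine era of cheap, concentrated attention), while the late steps barely move the total: that flattening is the Peak Attention plateau the essay describes.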
anonymous

How the internet is making us poor - Quartz - 2 views

  • Sixty percent of the jobs in the US are information-processing jobs, notes Erik Brynjolfsson, co-author of a recent book about this disruption, Race Against the Machine. It’s safe to assume that almost all of these jobs are aided by machines that perform routine tasks. These machines make some workers more productive. They make others less essential.
  • The turn of the new millennium is when the automation of middle-class information processing tasks really got under way, according to an analysis by the Associated Press based on data from the Bureau of Labor Statistics. Between 2000 and 2010, the jobs of 1.1 million secretaries were eliminated, replaced by internet services that made everything from maintaining a calendar to planning trips easier than ever.
  • Economist Andrew McAfee, Brynjolfsson’s co-author, has called these displaced people “routine cognitive workers.” Technology, he says, is now smart enough to automate their often repetitive, programmatic tasks. “We are in a desperate, serious competition with these machines,” concurs Larry Kotlikoff, a professor of economics at Boston University. “It seems like the machines are taking over all possible jobs.”
  • ...23 more annotations...
  • In the early 1800s, nine out of ten Americans worked in agriculture—now it’s around 2%. At its peak, about a third of the US population was employed in manufacturing—now it’s less than 10%. How many decades until the figures are similar for the information-processing tasks that typify rich countries’ post-industrial economies?
  • To see how the internet has disproportionately affected the jobs of people who process information, check out the gray bars dipping below the 0% line on the chart, below. (I’ve adapted this chart to show just the types of employment that lost jobs in the US during the great recession. Every other category continued to add jobs or was nearly flat.)
  • Here’s another clue about what’s been going on in the past ten years. “Return on capital” measures the return firms get when they spend money on capital goods like robots, factories, software—anything aside from people. (If this were a graph of return on people hired, it would be called “Return on labor”.)
  • Notice: the only industry where the return on capital is as great as manufacturing is “other industries”—a grab bag which includes all the service and information industries, as well as entertainment, health care and education. In short, you don’t have to be a tech company for investing in technology to be worthwhile.
  • For many years, the question of whether or not spending on information technology (IT) made companies more productive was highly controversial. Many studies found that IT spending either had no effect on productivity or was even counter-productive. But now a clear trend is emerging. More recent studies show that IT—and the organizational changes that go with it—are doing firms, especially multinationals (pdf), a great deal of good.
  • Winner-take-all and the power of capital to exacerbate inequality
  • One thing all our machines have accomplished, and especially the internet, is the ability to reproduce and distribute good work in record time. Barring market distortions like monopolies, the best software, media, business processes and, increasingly, hardware, can be copied and sold seemingly everywhere at once. This benefits “superstars”—the most skilled engineers or content creators. And it benefits the consumer, who can expect a higher average quality of goods.
  • But it can also exacerbate income inequality, says Brynjolfsson. This contributes to a phenomenon called “skill-biased technological [or technical] change.” “The idea is that technology in the past 30 years has tended to favor more skilled and educated workers versus less educated workers,” says Brynjolfsson. “It has been a complement for more skilled workers. It makes their labor more valuable. But for less skilled workers, it makes them less necessary—especially those who do routine, repetitive tasks.”
  • “Certainly the labor market has never been better for very highly-educated workers in the United States, and when I say never, I mean never,” MIT labor economist David Autor told American Public Media’s Marketplace.
  • The other winners in this scenario are anyone who owns capital.
  • As Paul Krugman wrote, “This is an old concern in economics; it’s “capital-biased technological change”, which tends to shift the distribution of income away from workers to the owners of capital.”
  • Computers are more disruptive than, say, the looms smashed by the Luddites, because they are “general-purpose technologies,” noted Peter Lindert, an economist at the University of California-Davis.
  • “The spread of computers and the Internet will put jobs in two categories,” said Andreessen. “People who tell computers what to do, and people who are told by computers what to do.” It’s a glib remark—but increasingly true.
  • In March 2012, Amazon acquired Kiva Systems, a warehouse robotics and automation company. In partnership with a company called Quiet Logistics, Kiva’s combination of mobile shelving and robots has already automated a warehouse in Andover, Massachusetts.
  • This time it’s faster: History is littered with technological transitions. Many of them seemed at the time to threaten mass unemployment of one type of worker or another, whether it was buggy whip makers or, more recently, travel agents. But here’s what’s different about information-processing jobs: The takeover by technology is happening much faster.
  • From 2000 to 2007, in the years leading up to the great recession, GDP and productivity in the US grew faster than at any point since the 1960s, but job creation did not keep pace.
  • Brynjolfsson thinks he knows why: More and more people were doing work aided by software. And during the great recession, employment growth didn’t just slow. As we saw above, in both manufacturing and information processing, the economy shed jobs, even as employment in the service sector and professional fields remained flat.
  • Especially in the past ten years, economists have seen a reversal of what they call “the great compression”—that period from the second world war through the 1970s when, in the US at least, more people were crowded into the ranks of the middle class than ever before.
  • There are many reasons why the economy has reversed this “compression,” transforming into an “hourglass economy” with many fewer workers in the middle class and more at either the high or the low end of the income spectrum.
  • The hourglass represents an income distribution that has been more nearly the norm for most of the history of the US. That it’s coming back should worry anyone who believes that a healthy middle class is an inevitable outcome of economic progress, a mainstay of democracy and a healthy society, or a driver of further economic development.
    • anonymous
       
      This is the meaty center. It's what I worry about. The "Middle Class" may just be an anomaly.
  • Indeed, some have argued that as technology aids the gutting of the middle class, it destroys the very market required to sustain it—that we’ll see “less of the type of innovation we associate with Steve Jobs, and more of the type you would find at Goldman Sachs.”
  • So how do we deal with this trend? The possible solutions to the problems of disruption by thinking machines are beyond the scope of this piece. As I’ve mentioned in other pieces published at Quartz, there are plenty of optimists ready to declare that the rise of the machines will ultimately enable higher standards of living, or at least forms of unemployment as foreign to us as “big data scientist” would be to a scribe of the 17th century.
  • But that’s only as long as you’re one of the ones telling machines what to do, not being told by them. And that will require self-teaching, creativity, entrepreneurialism and other traits that may or may not be latent in children, as well as retraining adults who aspire to middle class living. For now, sadly, your safest bet is to be a technologist and/or own capital, and use all this automation to grab a bigger-than-ever share of a pie that continues to expand.
  •  
    "Everyone knows the story of how robots replaced humans on the factory floor. But in the broader sweep of automation versus labor, a trend with far greater significance for the middle class-in rich countries, at any rate-has been relatively overlooked: the replacement of knowledge workers with software. One reason for the neglect is that this trend is at most thirty years old, and has become apparent in economic data only in perhaps the past ten years. The first all-in-one commercial microprocessor went on sale in 1971, and like all inventions, it took decades for it to become an ecosystem of technologies pervasive and powerful enough to have a measurable impact on the way we work."
anonymous

Tools Never Die, the Finale - 0 views

  • So what Kevin found is not exactly what I asked him to find; the original tool is no longer being made, but the idea, the concept, lives on in new, adaptive forms. Was that our bet? "Remember," he wrote me a little defensively, "I did not say 'no technological device' but rather 'no species of technology' [has disappeared], so my emphasis is on the underlying technology rather than the physical device."
  • But the deeper lesson of this whole exercise is that — to a degree I didn't appreciate until Kevin forced me to look — technology does indeed persist. Tools, machines, they change, they adapt, they morph, but they continue to be made. I hadn't noticed this tenaciousness before.
  • Kevin would go further. He has a radical notion, and he talks about it in his book What Technology Wants. He says most living things eventually go extinct. But technology, perhaps, is immortal.
  • ...4 more annotations...
  • Also, when comparing tools to life, the time scales are ridiculously different. Trilobites ranged the Earth for 270 million years. The Paleolithic axe is an infant by comparison, merely 100,000 years old. The Homo sapiens who made that axe are only 200,000 years old. Who's to say that our ideas won't vanish long before the trilobites did?
  • Ideas, what do they use? Not chemicals. Richard Dawkins says they leap from "brain to brain, via a process which, in the broad sense, can be called imitation." People see a new invention, then they tell friends about it, or they put it onto a cave wall, papyrus, into song, or a book, newspaper, radio, TV, movies, poems, the internet. That way, the invention can be stored and copied.
  • Or is it possible that technology is inherently persistent, that it just won't be thrown out? That's what Kevin is suggesting. That's "What Technology Wants." It "wants" to be copied, to last. I find this idea a bit too mystical for my tastes.
  • "I don't know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people's ideas renew themselves, before sending out copies of themselves in an informational diaspora...Who's in charge, according to this vision — we or our memes?
  •  
    "A few weeks ago, Kevin, founding editor of Wired Magazine and world-class gadget geek, made me this bet: I bet, he said, "there is no species of technology that's gone globally extinct on this planet." By which he meant - or I took him to mean - there is no tool, no invention ever manufactured by humans that isn't still being made new today."
anonymous

The Technium: Bootstrapping the Industrial Age - 0 views

  • In February 1942, R. Bradley, a British officer in the Royal Artillery in World War II, was captured and then held prisoner by the Japanese in Singapore. Their camp was remote, supplies were almost non-existent, and they were treated roughly as POWs; when they rebelled they were locked in a confinement shed without food.
  • But they were tinkerers, too. Together with some other POWs in his camp, Bradley stole hand tools from the Japanese soldiers and from these bits and pieces he transformed scrap metal into a miniature lathe.
  • It was tiny enough to be kept a secret, big enough to be useful.
  • ...16 more annotations...
  • The lathe was a tool-making egg; it was used to manufacture more sophisticated items.
  • During the two years of their internment the lathe remade the tools -- like taps and dies -- which were first used to create it. A lathe has those self-reproductive qualities.
  • Over years of tinkering, Gingery was able to bootstrap a full-bore machine shop from alley scraps. He made rough tools that made better tools, which then made tools good enough to make real stuff.
  • Gingery began with a simple backyard foundry. This was a small 5-gallon bucket packed with sand.
  • In its center was a coffee can of smoldering BBQ charcoal. Inside the can of charcoal was a small ceramic crucible into which he threw scrap aluminum – cans, etc. Gingery forced air into this crude furnace via a fan, burning the charcoal with enough heat to melt the aluminum. He poured the molten metal into a mold of wet sand carved out in the shape he wanted. When the cast was cool he had a workable metal holding plate, which became the heart of a homemade lathe. Other lathe parts were cast. He finished these rough parts with hand tools. His one “cheat” was adding a used electric motor – although it is not impossible to imagine a wind or water powered version.
  • When the rough lathe was up and running he used it to turn out the parts for a drill press. With the drill press and lathe operating he constantly reworked pieces of the lathe itself, replacing parts with improved versions. In this way, his tiny machine shop was an upcreation device, capable of generating a machine of higher precision than itself.
  • Gingery recapitulated the evolution of technology, the great pattern by which simple tools create more complex tools and so on ad infinitum. This expansion of upcreation power is the means by which an entire culture lifts itself out of the mud by pulling up on its bootstraps.
  • Yet it is obvious this little demonstration is not pure. As a way to make your own machine tools, Gingery's plans are fine and dandy. He uses cast-off washing machine motors and other junkyard scrap parts to grow a fairly robust machine shop. But as an example of relaunching a technological society in a kind of Robinson Crusoe maneuver (landing somewhere and starting civilization up), it's a cheat, because in this latter game you don't get to start with discarded aluminum cans, scavenged nuts and bolts, old electric motors and waste sheet metal.
  • To really navigate the minimum bootstrap path through the industrial web, you’d have to start with finding your own ore, mining and refining it with primitive tools, firing up bricks, rolling out sheet metal, developing screws and bolts by hand – all just to get you to the point where you’d have enough tools and materials to make the simple 5-gallon bucket foundry that Dave Gingery started with.
  • Select at random any one of the many thousands of items within the reach of where you now sit. None of them could exist without many of the others around it. No technology is an island.
  • Let’s take a very sophisticated item: one web page. A web page relies on perhaps a hundred thousand other inventions, all needed for its birth and continued existence. There is no web page anywhere without the inventions of HTML code, without computer programming, without LEDs or cathode ray tubes, without solid state computer chips, without telephone lines, without long-distance signal repeaters, without electrical generators, without high-speed turbines, without stainless steel, iron smelters, and control of fire. None of these concrete inventions would exist without the elemental inventions of writing, of an alphabet, of hypertext links, of indexes, catalogs, archives, libraries and the scientific method itself. To recapitulate a web page you have to re-create all these other functions. You might as well remake modern society.
  • This is why restarting a sophisticated society after a devastating setback is so hard. Without all the adjacent items in a given ecological bundle, a single technology can have no effect
  • you need them all working to get one working
  • The conundrum of disaster relief is a testimony to this deep interdependency: one needs roads to bring petrol but petrol to clear roads, medicines to heal people, but healthy people to dispense medicines, communications to enable organization but organization to restore communications. We see the interdependent platform of technology primarily when it breaks down.
  • This is also the explanation of why we should not confuse a good clear view of the future with a short distance. We can see the perfect outlines of where technology is going, but we tend to overestimate how soon it will come. Usually the delay (in our eager eyes) is due to the invisible ecology of other needed technologies that aren’t ready yet.
    • anonymous
       
      Classic example that's relatable to nerds: Virtual Reality. In the '90s, the graphics tech wasn't close to where it needed to be. Also, RAM prices and other hardware limitations (speed) made implementing it in any serious way a joke. Now, of course, the Oculus Rift is a consumer good. We don't call stuff "VR" anymore (as a buzzword), we just know we can buy a cool attachment that makes everything 3D.
  • The invention will hang suspended in the future for many years, not coming any closer to the now. Then when the ignored co-technologies are in place it will appear in our lives all of a sudden, with much surprise and applause for its unexpected appearance.
  •  
    "A favorite fantasy game for engineers is to imagine how they might re-invent essential technology from scratch. If you were stranded on an island, or left behind after Armageddon, and you needed to make your own blade, say, or a book, maybe a pair of working radios, what would it take to forge iron, make paper, or create electricity?"
anonymous

Ten Responses to the Technological Unemployment Problem - 1 views

  • There are many economists who still maintain that technological unemployment cannot happen
  • Since growing numbers of people won’t be able to earn money from their labor, it might make sense to just give everyone a guaranteed income whether or not they work.
  • Often this idea is characterized as socialist, and in some senses it is, but this characterization overlooks that the goal of a UBI (universal basic income) is actually to save market capitalism.
    • anonymous
       
      The dualistic nature of this approach is quite incompatible with America's penchant for binary thinking. I think some peoples' heads might explode at the thought.
  • ...8 more annotations...
  • By taking advantage of new decentralized technologies and living as cheaply as possible, people might be able to increasingly just opt out of capitalism and consumerism entirely.
  • All resources become the common heritage of all of the inhabitants, not just a select few.” This arrangement is made possible by aggressive use of advanced technologies to create an abundance of resources and thereby negate the need for any sort of rationing.
  • put money directly in people’s hands so they can spend it and keep the market economy going. The main difference is that instead of making the income unconditional, Ford advocates doling out money according to an incentive scheme that encourages behavior society desires.
  • if we can find a way to directly upgrade human minds—such as through the use of brain-computer interfaces—then workers would be able to keep pace with technological change and readily adapt to new jobs and industries as quickly as they crop up.
  • they push for a series of common sense policy fixes, such as fixing education to better prepare people for STEM fields or reforming the patent system to mitigate drags on innovation.
  • Therefore we should try to accelerate technological progress by whatever means necessary so that we can make the painful transition as short as possible—much like tearing off a bandaid.
  • Yes, there will be fewer jobs available, and certainly people’s incomes will suffer, but technology will simultaneously bring down the cost of living at a fast enough rate that people will survive just fine without the need for government intervention or economic restructuring.
    • anonymous
       
      Yup! And all thanks to the LP's consistent explanation for how it all works: "a miracle occurs."
  • Once AGI arrives we will have much bigger issues to contend with, such as will the human race survive being displaced as the most intelligent beings on planet Earth?
  •  
    "On the internet and in the media there has been growing discussion of technological unemployment. People are increasingly concerned that automation will displace more and more workers-that in fact there might be no turning back at this point. We may be reaching the end of work as we know it."
anonymous

Jaron Lanier: The Internet destroyed the middle class - 2 views

  • His book continues his war on digital utopianism and his assertion of humanist and individualistic values in a hive-mind world. But Lanier still sees potential in digital technology: He just wants it reoriented away from its main role so far, which involves “spying” on citizens, creating a winner-take-all society, eroding professions and, in exchange, throwing bonbons to the crowd.
  • This week sees the publication of “Who Owns the Future?,” which digs into technology, economics and culture in unconventional ways.
  • Much of the book looks at the way Internet technology threatens to destroy the middle class by first eroding employment and job security, along with various “levees” that give the economic middle stability.
  • ...55 more annotations...
  • “Here’s a current example of the challenge we face,” he writes in the book’s prelude: “At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only 13 people. Where did all those jobs disappear? And what happened to the wealth that all those middle-class jobs created?”
  • But more important than Lanier’s hopes for a cure is his diagnosis of the digital disease. Eccentric as it is, “Future” is one of the best skeptical books about the online world, alongside Nicholas Carr’s “The Shallows,” Robert Levine’s “Free Ride” and Lanier’s own “You Are Not a Gadget.”
  • One is that the number of people who are contributing to the system to make it viable is probably the same.
  • And furthermore, many people kind of have to use social networks for them to be functional besides being valuable.
  • So there’s still a lot of human effort, but the difference is that whereas before when people made contributions to the system that they used, they received formal benefits, which means not only salary but pensions and certain kinds of social safety nets. Now, instead, they receive benefits on an informal basis. And what an informal economy is like is the economy in a developing country slum. It’s reputation, it’s barter, it’s that kind of stuff.
  • Yeah, and I remember there was this fascination with the idea of the informal economy about 10 years ago. Stewart Brand was talking about how brilliant it is that people get by in slums on an informal economy. He’s a friend so I don’t want to rag on him too much. But he was talking about how wonderful it is to live in an informal economy and how beautiful trust is and all that.
  • And you know, that’s all kind of true when you’re young and if you’re not sick, but if you look at the infant mortality rate and the life expectancy and the education of the people who live in those slums, you really see what the benefit of the formal economy is if you’re a person in the West, in the developed world.
  • So Kodak has 140,000 really good middle-class employees, and Instagram has 13 employees, period. You have this intense concentration of the formal benefits, and that winner-take-all feeling is not just for the people who are on the computers but also from the people who are using them. So there’s this tiny token number of people who will get by from using YouTube or Kickstarter, and everybody else lives on hope. There’s not a middle-class hump. It’s an all-or-nothing society.
  • the person who lost his job at Kodak still has to pay rent with old-fashioned money he or she is no longer earning. He can’t pay his rent with cultural capital that’s replaced it.
  • The informal way of getting by doesn’t tide you over when you’re sick and it doesn’t let you raise kids and it doesn’t let you grow old. It’s not biologically real.
  • If we go back to the 19th century, photography was kind of born as a labor-saving device, although we don’t think of it that way.
  • And then, you know, along a similar vein at that time, early audio recordings, which today would sound horrible to us, were indistinguishable from real music to people who did double-blind tests and whatnot.
  • So in the beginning photography was kind of a labor saving device. And whenever you have a technological advance that’s less hassle than the previous thing, there’s still a choice to make. And the choice is, do you still get paid for doing the thing that’s easier?
  • And so you could make the argument that a transition to cars should create a world where drivers don’t get paid, because, after all, it’s fun to drive.
  • We kind of made a bargain, a social contract, in the 20th century that even if jobs were pleasant people could still get paid for them. Because otherwise we would have had a massive unemployment. And so to my mind, the right question to ask is, why are we abandoning that bargain that worked so well?
    • anonymous
       
      I think that's a worthy question considering the speed with which we adopt every possible technology; to hell with foresight.
  • Of course jobs become obsolete. But the only reason that new jobs were created was because there was a social contract in which a more pleasant, less boring job was still considered a job that you could be paid for. That’s the only reason it worked. If we decided that driving was such an easy thing [compared to] dealing with horses that no one should be paid for it, then there wouldn’t be all of those people being paid to be Teamsters or to drive cabs. It was a decision that it was OK to have jobs that weren’t terrible.
  • I mean, the whole idea of a job is entirely social construct. The United States was built on slave labor. Those people didn’t have jobs, they were just slaves. The idea of a job is that you can participate in a formal economy even if you’re not a baron. That there can be, that everybody can participate in the formal economy and the benefit of having everybody participate in the formal economy, there are annoyances with the formal economy because capitalism is really annoying sometimes.
  • But the benefits are really huge, which is you get a middle-class distribution of wealth and clout so the mass of people can outspend the top, and if you don’t have that you can’t really have democracy. Democracy is destabilized if there isn’t a broad distribution of wealth.
  • And then the other thing is that if you like market capitalism, if you’re an Ayn Rand person, you have to admit that markets can only function if there are customers and customers can only come if there’s a middle hump. So you have to have a broad distribution of wealth.
    • anonymous
       
      Ha ha. Ayn Rand people don't have to admit to *anything,* trust me, dude.
  • It was all a social construct to begin with, so what changed, to get to your question, is that at the turn of the [21st] century it was really Sergey Brin at Google who just had the thought of, well, if we give away all the information services, but we make money from advertising, we can make information free and still have capitalism.
  • But the problem with that is it reneges on the social contract where people still participate in the formal economy. And it’s a kind of capitalism that’s totally self-defeating because it’s so narrow. It’s a winner-take-all capitalism that’s not sustaining.
    • anonymous
       
      This makes me curious. Is he arguing that there are fewer *nodes* because the information access closes them?
  • You argue that the middle class, unlike the rich and the poor, is not a natural class but was built and sustained through some kind of intervention.
    • anonymous
       
      My understanding was that the U.S. heads of business got the nod to go ahead and start manufacturing things *other* than weapons, because our industrial capabilities weren't annihilated relative to so many others.
  • There’s always academic tenure, or a taxi medallion, or a cosmetology license, or a pension. There’s often some kind of license or some kind of ratcheting scheme that allows people to keep their middle-class status.
  • In a raw kind of capitalism there tend to be unstable events that wipe away the middle and tend to separate people into rich and poor. So these mechanisms are undone by a particular kind of style that is called the digital open network.
  • Music is a great example where value is copied. And so once you have it, again it’s this winner-take-all thing where the people who really win are the people who run the biggest computers. And a few token people, an incredibly tiny number, get very successful YouTube videos, and everybody else lives on hope or lives with their parents or something.
  • I guess all orthodoxies are built on lies. But there’s this idea that there must be tens of thousands of people who are making a great living as freelance musicians because you can market yourself on social media.
  • And whenever I look for these people – I mean when I wrote “Gadget” I looked around and found a handful – and at this point three years later, I went around to everybody I could to get actual lists of people who are doing this and to verify them, and there are more now. But like in the hip-hop world I counted them all and I could find about 50. And I really talked to everybody I could. The reason I mention hip-hop is because that’s where it happens the most right now.
  • The interesting thing about it is that people advertise, “Oh, what an incredible life. She’s this incredibly lucky person who’s worked really hard.” And that’s all true. She’s in her 20s, and it’s great that she’s found this success, but what this success amounts to is that she makes maybe $250,000 a year, and she rents a house that’s worth $1.1 million in L.A. And this is all breathlessly reported as this great success.
  • And that’s good for a 20-year-old, but the people at the very top of the game now are doing only as well as what used to be considered good for a middle-class life.
    • anonymous
       
      Quite true. She's obviously not rolling in solid gold Cadillacs.
  • But for someone who’s out there, a star with a billion views, that’s a crazy low expectation. She’s not even in the 1 percent. For the tiny token number of people who make it to the top of YouTube, they’re not even making it into the 1 percent.
  • The issue is whether we’re going to have a middle class anymore, and if that’s our expectation, we won’t. And then we won’t have democracy.
  • I think in the whole of music in America, the number is in the low hundreds. It’s really small. I wish all of those people my deepest blessings, and I celebrate the success they find, but it’s just not a way you can build a society.
  • The other problem is they would have to self-fund. This is getting back to the informal economy where you’re living in the slum or something, so you’re desperate to get out so you impress the boss man with your music skills or your basketball skills. And the idea of doing that for the whole of society is not progress. It should be the reverse. What we should be doing is bringing all the people who are in that into the formal economy. That’s what’s called development. But this is the opposite of that. It’s taking all the people from the developed world and putting them into a cycle of the developing world of the informal economy.
  • We don’t realize that our society and our democracy ultimately rest on the stability of middle-class jobs. When I talk to libertarians and socialists, they have this weird belief that everybody’s this abstract robot that won’t ever get sick or have kids or get old. It’s like everybody’s this eternal freelancer who can afford downtime and can self-fund until they find their magic moment or something.
  • The way society actually works is there’s some mechanism of basic stability so that the majority of people can outspend the elite so we can have a democracy. That’s the thing we’re destroying, and that’s really the thing I’m hoping to preserve. So we can look at musicians and artists and journalists as the canaries in the coal mine, and is this the precedent that we want to follow for our doctors and lawyers and nurses and everybody else? Because technology will get to everybody eventually.
  • I have 14-year-old kids who come to my talks who say, “But isn’t open source software the best thing in life? Isn’t it the future?” It’s a perfect thought system. It reminds me of communists I knew when growing up or Ayn Rand libertarians.
  • It’s one of these things where you have a simplistic model that suggests this perfect society so you just believe in it totally. These perfect societies don’t work. We’ve already seen hyper-communism come to tears. And hyper-capitalism come to tears. And I just don’t want to have to see that for cyber-hacker culture. We should have learned that these perfect simple systems are illusions.
  • You’re concerned with equality and a shrinking middle class. And yet you don’t seem to consider yourself a progressive or a man of the left — why not?
  • I am culturally a man on the left. I get a lot of people on the left. I live in Berkeley and everything. I want to live in a world where outcomes for people are not predetermined in advance.
  • The problem I have with socialist utopias is there’s some kind of committee trying to soften outcomes for people. I think that imposes models of outcomes on other people’s lives. So in a spiritual sense there’s some bit of libertarian in me. But the critical thing for me is moderation. And if you let that go too far you do end up with a winner-take-all society that ultimately crushes everybody even worse. So it has to be moderated.
  • I think seeking perfection in human affairs is a perfect way to destroy them.
  • All of these things are magisterial, where the people who become involved in them tend to wish they could be the only ones.
  • Libertarians tend to think the economy can totally close its own loops, that you can get rid of government. And I ridicule that in the book. There are other people who believe that if you could get everybody to talk over social networks, if we could just cooperate, we wouldn’t need money anymore. And I recommend they try living in a group house and then they’ll see it’s not true.
    • anonymous
       
      Group House. HAH!
  • So what we have to demand of digital technology is that it not try to be a perfect system that takes over everything. That it balances the excesses of the other magisteria.
  • And that it doesn’t concentrate power too much, and if we can just get to that point, then we’ll really be fine. I’m actually modest. People have been accusing me of being super-ambitious lately, but I feel like in a way I’m the most modest person in the conversation.
  • I’m just trying to avoid total dysfunction.
    • anonymous
       
      See, now I like this guy. This is like the political equivalent of aiming for the realist view in geopolitics. We separate what is likely from what is unlikely and aim not for "the best" situation, but a situation where the worst aspects have been mitigated. It's backwards thinking that both parties would have a hard time integrating into their (ughhh) brand.
  • Let’s stick with politics for one more. Is there something dissonant about the fact that the greatest fortunes in human history have been created with a system developed largely by taxpayer dollars?
  • Yeah, no kidding. I was there. I gotta say, every little step of this thing was really funded by either the military or public research agencies. If you look at something like Facebook, Facebook is adding the tiniest little rind of value over the basic structure that’s there anyway. In fact, it’s even worse than that. The original designs for networking, going back to Ted Nelson, kept track of everything everybody was pointing at so that you would know who was pointing at your website. In a way Facebook is just recovering information that was deliberately lost because of the fetish for being anonymous. That’s also true of Google.
  • I don’t hate anything about e-books or e-book readers or tablets. There’s a lot of discussion about that, and I think it’s misplaced. The problem I have is whether we believe in the book itself.
  • Books are really, really hard to write. They represent a kind of summit of grappling with what one really has to say. And what I’m concerned with is that when Silicon Valley looks at books, they often think of them really differently, as just data points that you can mush together. They’re divorcing books from their role in personhood.
    • anonymous
       
      Again, a take I rarely encounter.
  • I was in a cafe this morning where I heard some stuff I was interested in, and nobody could figure out what it was. It was Spotify or one of these … so they knew what stream they were getting, but they didn’t know what music it was. Then it changed to other music, and they didn’t know what that was. And I tried to use one of the services that determines what music you’re listening to, but it was a noisy place and that didn’t work. So what’s supposed to be an open information system serves to obscure the source of the music. It serves as a closed information system. It actually loses the information.
    • anonymous
       
      I have had this very thing happen too. I didn't get to have my moment of discovery. I think Google Glass is going to fix that. Hah. :)
  • And if we start to see that with books in general – and I say if – if you look at the approach that Google has taken to the Google library project, they do have the tendency to want to mush things together. You see the thing decontextualized.
  • I have sort of resisted putting my music out lately because I know it just turns into these mushes. Without context, what does my music mean? I make very novel sounds, but I don’t see any value in sharing novel sounds that are decontextualized. Why would I write if people are just going to get weird snippets that are mushed together, without knowing the overall position or the history of the writer or anything? What would be the point in that? The day books become mush is the day I stop writing.
  • So realizing how much better musical instruments were to use as human interfaces helped me to be skeptical about the whole digital enterprise. Which I think helped me be a better computer scientist, actually.
  • Sure. If you go way back I was one of the people who started the whole music-should-be-free thing. You can find the fire-breathing essays where I was trying to articulate the thing that’s now the orthodoxy: we should free ourselves from the labels and the middleman and this will be better. I believed it at the time because it sounds better, it really does. But I know a lot of these musicians, and I could see that it wasn’t actually working. I think fundamentally you have to be an empiricist. I just saw that in the real lives I know — both older and younger people coming up — it was not as good as what it had once been. So there must be something wrong with our theory, as good as it sounded. It was really that simple.
  •  
    "Kodak employed 140,000 people. Instagram, 13. A digital visionary says the Web kills jobs, wealth -- even democracy"
anonymous

What the Luddites Really Fought Against - 0 views

  • The word “Luddite,” handed down from a British industrial protest that began 200 years ago this month, turns up in our daily language in ways that suggest we’re confused not just about technology, but also about who the original Luddites were and what being a modern one actually means.
  • Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new. Moreover, the idea of smashing machines as a form of industrial protest did not begin or end with them. In truth, the secret of their enduring reputation depends less on what they did than on the name under which they did it. You could say they were good at branding.
  • On March 11, 1811, in Nottingham, a textile manufacturing center, British troops broke up a crowd of protesters demanding more work and better wages.
  • ...3 more annotations...
  • So if the Luddites weren’t attacking the technological foundations of industry, what made them so frightening to manufacturers? And what makes them so memorable even now? Credit on both counts goes largely to a phantom.
  • In fact, no such person existed. Ludd was a fiction concocted from an incident that supposedly had taken place 22 years earlier in the city of Leicester. According to the story, a young apprentice named Ludd or Ludham was working at a stocking frame when a superior admonished him for knitting too loosely. Ordered to “square his needles,” the enraged apprentice instead grabbed a hammer and flattened the entire mechanism. The story eventually made its way to Nottingham, where protesters turned Ned Ludd into their symbolic leader.
  • People of the time recognized all the astonishing new benefits the Industrial Revolution conferred, but they also worried, as Carlyle put it in 1829, that technology was causing a “mighty change” in their “modes of thought and feeling.”
  •  
    "The label now has many meanings, but when the group protested 200 years ago, technology wasn't really the enemy"
anonymous

USENIX 2011 Keynote: Network Security in the Medium Term, 2061-2561 AD - 1 views

  • if we should meet up in 2061, much less in the 26th century, you’re welcome to rib me about this talk. Because I’ll be happy to still be alive to be ribbed.
  • The question I’m going to spin entertaining lies around is this: what is network security going to be about once we get past the current sigmoid curve of accelerating progress and into a steady state, when Moore’s first law is long since burned out, and networked computing appliances have been around for as long as steam engines?
  • a few basic assumptions about the future
  • ...82 more annotations...
  • it’s not immediately obvious that I can say anything useful about a civilization run by beings vastly more intelligent than us. I’d be like an australopithecine trying to visualize daytime cable TV.
  • The idea of an AI singularity
  • the whole idea of artificial general intelligence strikes me as being as questionable as 19th century fantasies about steam-powered tin men.
  • if you start trying to visualize a coherent future that includes aliens, telepathy, faster than light travel, or time machines, your futurology is going to rapidly run off the road and go crashing around in the blank bits of the map that say HERE BE DRAGONS.
  • at least one barkingly implausible innovation will come along between now and 2061 and turn everything we do upside down
  • My crystal ball is currently predicting that base load electricity will come from a mix of advanced nuclear fission reactor designs and predictable renewables such as tidal and hydroelectric power.
  • We are, I think, going to have molecular nanotechnology and atomic scale integrated circuitry.
  • engineered solutions that work a bit like biological systems
  • Mature nanotechnology is going to resemble organic life forms the way a Boeing 737 resembles thirty tons of seagull biomass.
  • without a technological civilization questions of network security take second place to where to get a new flint arrowhead.
  • if we’re still alive in the 26th century you’re welcome to remind me of what I got wrong in this talk.
  • we’re living through the early days of a revolution in genomics and biology
  • We haven’t yet managed to raise the upper limit on human life expectancy (it’s currently around 120 years), but an increasing number of us are going to get close to it.
  • it’s quite likely that within another century the mechanisms underlying cellular senescence will be understood and treatable like other inborn errors of metabolism
  • another prediction: something outwardly resembling democracy everywhere.
  • Since 1911, democratic government by a republic has gone from being an eccentric minority practice to the default system of government world-wide
  • Democracy is a lousy form of government in some respects – it is particularly bad at long-term planning, for no event that lies beyond the electoral event horizon can compel a politician to pay attention to it
  • but it has two gigantic benefits: it handles transfers of power peacefully, and provides a pressure relief valve for internal social dissent.
  • there are problems
  • In general, democratically elected politicians are forced to focus on short-term solutions to long-term problems because their performance is evaluated by elections held on a time scale of single-digit years
  • Democratic systems are prone to capture by special interest groups that exploit the information asymmetry that’s endemic in complex societies
  • The adversarial two-party model is a very bad tool for generating consensus on how to tackle difficult problems with no precedents
  • Finally, representative democracy scales up badly
  • Nor are governments as important as they used to be.
  • the US government, the largest superpower on the block right now, is tightly constrained by the international trade system it promoted in the wake of the second world war.
  • we have democratic forms of government, without the transparency and accountability.
  • At least, until we invent something better – which I expect will become an urgent priority before the end of the century.
  • The good news is, we’re a lot richer than our ancestors. Relative decline is not tragic in a positive-sum world.
  • Assuming that they survive the obstacles on the road to development, this process is going to end fairly predictably: both India and China will eventually converge with a developed world standard of living, while undergoing the demographic transition to stable or slowly declining populations that appears to be an inevitable correlate of development.
  • a quiet economic revolution is sweeping Africa
  • In 2006, for the first time, more than half of the planet’s human population lived in cities. And by 2061 I expect more than half of the planet’s human population will live in conditions that correspond to the middle class citizens of developed nations.
  • by 2061 we or our children are going to be living on an urban middle-class planet, with a globalized economic and financial infrastructure recognizably descended from today’s system, and governments that at least try to pay lip service to democratic norms.
  • And let me say, before I do, that the picture I just painted – of the world circa 2061, which is to say of the starting point from which the world of 2561 will evolve – is bunk.
  • It’s a normative projection
  • I’m pretty certain that something utterly unexpected will come along and up-end all these projections – something as weird as the world wide web would have looked in 1961.
  • And while the outer forms of that comfortable, middle-class urban developed-world planetary experience might look familiar to us, the internal architecture will be unbelievably different.
  • Let’s imagine that, circa 1961 – just fifty years ago – a budding Nikola Tesla or Bill Packard somewhere in big-city USA is tinkering in his garage and succeeds in building a time machine. Being adventurous – but not too adventurous – he sets the controls for fifty years in the future, and arrives in downtown San Francisco. What will he see, and how will he interpret it?
  • a lot of the buildings are going to be familiar
  • Automobiles are automobiles, even if the ones he sees look kind of melted
  • Fashion? Hats are out, clothing has mutated in strange directions
  • He may be thrown by the number of pedestrians walking around with wires in their ears, or holding these cigarette-pack-sized boxes with glowing screens.
  • But there seem to be an awful lot of mad people walking around with bits of plastic clipped to their ears, talking to themselves
  • The outward shape of the future contains the present and the past, embedded within it like flies in amber.
  • Our visitor from 1961 is familiar with cars and clothes and buildings
  • But he hasn’t heard of packet switched networks
  • Our time traveller from 1961 has a steep learning curve if he wants to understand the technology the folks with the cordless headsets are using.
  • The social consequences of a new technology are almost always impossible to guess in advance.
  • Let me take mobile phones as an example. They let people talk to one another – that much is obvious. What is less obvious is that for the first time the telephone network connects people, not places
  • For example, we’re currently raising the first generation of kids who won’t know what it means to be lost – everywhere they go, they have GPS service and a moving map that will helpfully show them how to get wherever they want to go.
  • to our time traveller from 1961, it’s magic: you have a little glowing box, and if you tell it “I want to visit my cousin Bill, wherever he is,” a taxi will pull up and take you to Bill’s house
  • The whole question of whether a mature technosphere needs three or four billion full-time employees is an open one, as is the question of what we’re all going to do if it turns out that the future can’t deliver jobs.
  • We’re still in the first decade of mass mobile internet uptake, and we still haven’t seen what it really means when the internet becomes a pervasive part of our social environment, rather than something we have to specifically sit down and plug ourselves in to, usually at a desk.
  • So let me start by trying to predict the mobile internet of 2061.
  • the shape of the future depends on whether whoever provides the basic service of communication funds their service by charging for bandwidth or charging for a fixed infrastructure cost.
  • These two models for pricing imply very different network topologies.
  • This leaves aside a third model, that of peer to peer mesh networks with no actual cellcos as such – just lots of folks with cheap routers. I’m going to provisionally assume that this one is hopelessly utopian
  • the security problems of a home-brew mesh network are enormous and gnarly; when any enterprising gang of scammers can set up a public router, who can you trust?
  • Let’s hypothesize a very high density, non-volatile serial storage medium that might be manufactured using molecular nanotechnology: I call it memory diamond.
  • wireless bandwidth appears to be constrained fundamentally by the transparency of air to electromagnetic radiation. I’ve seen some estimates that we may be able to punch as much as 2 Tb/sec through air; then we run into problems.
  • What can you do with 2 terabits per second per human being on the planet?
  • One thing you can do trivially with that kind of capacity is full lifelogging for everyone. Lifelogging today is in its infancy, but it’s going to be a major disruptive technology within two decades.
  • the resulting search technology essentially gives you a prosthetic memory.
  • Lifelogging offers the promise of indexing and retrieving the unwritten and undocumented. And this is both a huge promise and an enormous threat.
  • Lifelogging raises huge privacy concerns, of course.
  • The security implications are monstrous: if you rely on lifelogging for your memory or your ability to do your job, then the importance of security is pushed down Maslow’s hierarchy of needs.
  • if done right, widespread lifelogging to cloud based storage would have immense advantages for combating crime and preventing identity theft.
  • whether lifelogging becomes a big social issue depends partly on the nature of our pricing model for bandwidth, and how we hammer out the security issues surrounding the idea of our sensory inputs being logged for posterity.
  • at least until the self-driving automobile matches and then exceeds human driver safety.
  • We’re currently living through a period in genomics research that is roughly equivalent to the early 1960s in computing.
  • In particular, there’s a huge boom in new technologies for high speed gene sequencing.
  • full genome sequencing for individuals now available for around US $30,000, and expected to drop to around $1000–3000 within a couple of years.
  • Each of us is carrying around a cargo of 1–3 kilograms of bacteria and other unicellular organisms, which collectively outnumber the cells of our own bodies by ten to one.
  • These are for the most part commensal organisms – they live in our guts and predigest our food, or on our skin – and they play a significant role in the functioning of our immune system.
  • Only the rapid development of DNA assays for SARS – it was sequenced within 48 hours of its identification as a new pathogenic virus – made it possible to build and enforce the strict quarantine regime that saved us from somewhere between two hundred million and a billion deaths.
  • A second crisis we face is that of cancer
  • we can expect eventually to see home genome monitoring – both looking for indicators of precancerous conditions or immune disorders within our bodies, and performing metagenomic analysis on our environment.
  • If our metagenomic environment is routinely included in lifelogs, we have the holy grail of epidemiology within reach; the ability to exhaustively track the spread of pathogens and identify how they adapt to their host environment, right down to the level of individual victims.
  • In each of these three examples of situations where personal privacy may be invaded, there exists a strong argument for doing so in the name of the common good – for prevention of epidemics, for prevention of crime, and for prevention of traffic accidents. They differ fundamentally from the currently familiar arguments for invasion of our data privacy by law enforcement – for example, to read our email or to look for evidence of copyright violation. Reading our email involves our public and private speech, and looking for warez involves our public and private assertion of intellectual property rights … but eavesdropping on our metagenomic environment and our sensory environment impinges directly on the very core of our identities.
  • With lifelogging and other forms of ubiquitous computing mediated by wireless broadband, securing our personal data will become as important to individuals as securing our physical bodies.
  • the shifting sands of software obsolescence have for the most part buried our ancient learning mistakes.
  • So, to summarize: we’re moving towards an age where we may have enough bandwidth to capture pretty much the totality of a human lifespan, everything except for what’s going on inside our skulls.
  •  
    "Good afternoon, and thank you for inviting me to speak at USENIX Security." A fun read by Charlie Stoss."
  •  
    I feel like cancer may be a bit played up. I freak out more about dementia.
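A quick back-of-the-envelope check on the bandwidth and lifelogging highlights above. This is a minimal sketch, not anything from the talk itself: the 10 Mb/s capture bitrate and the 100-year lifespan are my assumptions, and only the ~2 Tb/sec air-bandwidth ceiling comes from the quoted estimates.

```python
# Rough arithmetic: how much data is a whole-life audio/video log,
# and how much of a 2 Tb/s-per-person channel does capturing it use?
SECONDS_PER_YEAR = 365.25 * 24 * 3600

capture_bps = 10e6        # assumed continuous lifelog bitrate, bits/second
lifespan_years = 100      # close to the ~120-year ceiling mentioned above

total_bits = capture_bps * lifespan_years * SECONDS_PER_YEAR
print(f"lifetime lifelog: {total_bits / 8 / 1e15:.1f} petabytes")   # ~3.9 PB
print(f"share of a 2 Tb/s channel: {capture_bps / 2e12:.4%}")       # ~0.0005%
```

Even with generous assumptions, whole-life capture is a few petabytes per person and a negligible slice of the projected wireless ceiling, which is why the talk treats bandwidth as ample and turns the hard question into securing the logs.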
anonymous

Solar panels could destroy U.S. utilities, according to U.S. utilities - 0 views

  • That is not wild-eyed hippie talk. It is the assessment of the utilities themselves.
  • Back in January, the Edison Electric Institute — the (typically stodgy and backward-looking) trade group of U.S. investor-owned utilities — released a report [PDF] that, as far as I can tell, went almost entirely without notice in the press. That’s a shame. It is one of the most prescient and brutally frank things I’ve ever read about the power sector. It is a rare thing to hear an industry tell the tale of its own incipient obsolescence.
  • You probably know that electricity is provided by utilities. Some utilities both generate electricity at power plants and provide it to customers over power lines. They are “regulated monopolies,” which means they have sole responsibility for providing power in their service areas. Some utilities have gone through deregulation; in that case, power generation is split off into its own business, while the utility’s job is to purchase power on competitive markets and provide it to customers over the grid it manages.
  • ...19 more annotations...
  • But the main thing to know is that the utility business model relies on selling power. That’s how they make their money.
  • Here’s how it works: A utility makes a case to a public utility commission (PUC), saying “we will need to satisfy this level of demand from consumers, which means we’ll need to generate (or purchase) this much power, which means we’ll need to charge these rates.”
  • The thing to remember is that it is in a utility’s financial interest to generate (or buy) and deliver as much power as possible. The higher the demand, the higher the investments, the higher the utility shareholder profits.
  • Now, into this cozy business model enters cheap distributed solar PV, which eats away at it like acid.
  • First, the power generated by solar panels on residential or commercial roofs is not utility-owned or utility-purchased. From the utility’s point of view, every kilowatt-hour of rooftop solar looks like a kilowatt-hour of reduced demand for the utility’s product.
  • (This is the same reason utilities are instinctively hostile to energy efficiency and demand response programs, and why they must be compelled by regulations or subsidies to create them. Utilities don’t like reduced demand!)
  • It’s worse than that, though. Solar power peaks at midday, which means it is strongest close to the point of highest electricity use — “peak load.”
  • Problem is, providing power to meet peak load is where utilities make a huge chunk of their money. Peak power is the most expensive power. So when solar panels provide peak power, they aren’t just reducing demand, they’re reducing demand for the utilities’ most valuable product.
  • This is a widely held article of faith, but EEI (of all places!) puts it to rest. (In this and all quotes that follow, “DER” means distributed energy resources, which for the most part means solar PV.) “Due to the variable nature of renewable DER, there is a perception that customers will always need to remain on the grid. While we would expect customers to remain on the grid until a fully viable and economic distributed non-variable resource is available, one can imagine a day when battery storage technology or micro turbines could allow customers to be electric grid independent. To put this into perspective, who would have believed 10 years ago that traditional wire line telephone customers could economically ‘cut the cord?’” [Emphasis mine.]
  • Just the other day, Duke Energy CEO Jim Rogers said, “If the cost of solar panels keeps coming down, installation costs come down and if they combine solar with battery technology and a power management system, then we have someone just using [the grid] for backup.”
  • What happens if a whole bunch of customers start generating their own power and using the grid merely as backup? The EEI report warns of “irreparable damages to revenues and growth prospects” of utilities.
  • As ratepayers opt for solar panels (and other distributed energy resources like micro-turbines, batteries, smart appliances, etc.), it raises costs on other ratepayers and hurts the utility’s credit rating. As rates rise on other ratepayers, the attractiveness of solar increases, so more opt for it. Thus costs on remaining ratepayers are even further increased, the utility’s credit even further damaged. It’s a vicious, self-reinforcing cycle.
  • One implication of all this — a poorly understood implication — is that rooftop solar fucks up the utility model even at relatively low penetrations, because it goes straight at utilities’ main profit centers.
  • (“Despite all the talk about investors assessing the future in their investment evaluations,” the report notes dryly, “it is often not until revenue declines are reported that investors realize that the viability of the business is in question.” In other words, investors aren’t that smart and rational financial markets are a myth.)
  • So rates would rise by 20 percent for those without solar panels. Can you imagine the political shitstorm that would create? (There are reasons to think EEI is exaggerating this effect, but we’ll get into that in the next post.)
  • The report compares utilities’ possible future to the experience of the airlines during deregulation or to the big monopoly phone companies when faced with upstart cellular technologies.
  • In case the point wasn’t made, the report also analogizes utilities to the U.S. Postal Service, Kodak, and RIM, the maker of Blackberry devices. These are not meant to be flattering comparisons.
  • Remember, too, that these utilities are not Google or Facebook. They are not accustomed to a state of constant market turmoil and reinvention.
  • This is a venerable old boys network, working very comfortably within a business model that has been around, virtually unchanged, for a century.
  •  
    "Solar power and other distributed renewable energy technologies could lay waste to U.S. power utilities and burn the utility business model, which has remained virtually unchanged for a century, to the ground."
anonymous

U.S. Defense Policy in the Wake of the Ukrainian Affair - 1 views

  • There was a profoundly radical idea embedded in this line of thought. Wars between nations or dynastic powers had been a constant condition in Europe, and the rest of the world had been no less violent. Every century had had systemic wars in which the entire international system (increasingly dominated by Europe since the 16th century) had participated. In the 20th century, there were the two World Wars, in the 19th century the Napoleonic Wars, in the 18th century the Seven Years' War, and in the 17th century the Thirty Years' War.
  • Those who argued that U.S. defense policy had to shift its focus away from peer-to-peer and systemic conflict were in effect arguing that the world had entered a new era in which what had been previously commonplace would now be rare or nonexistent.
  • The radical nature of this argument was rarely recognized by those who made it, and the evolving American defense policy that followed this reasoning was rarely seen as inappropriate.
  • ...47 more annotations...
  • There were two reasons for this argument.
  • Military planners are always obsessed with the war they are fighting. It is only human to see the immediate task as a permanent task.
  • That generals always fight the last war must be amended to say that generals always believe the war they are fighting is the permanent war.
  • The second reason was that no nation-state was in a position to challenge the United States militarily.
  • After the Cold War ended, the United States was in a singularly powerful position. The United States remains in a powerful position, but over time, other nations will increase their power, form alliances and coalitions and challenge the United States.
  • No matter how benign a leading power is -- and the United States is not uniquely benign -- other nations will fear it, resent it or want to shame it for its behavior.
  • The idea that other nation-states will not challenge the United States seemed plausible for the past 20 years, but the fact is that nations will pursue interests that are opposed to American interest and by definition, pose a peer-to-peer challenge. The United States is potentially overwhelmingly powerful, but that does not make it omnipotent. 
  • It must also be remembered that asymmetric warfare and operations other than war always existed between and during peer-to-peer wars and systemic wars.
  • Asymmetric wars and operations other than war are far more common than peer-to-peer and systemic wars.
  • They can appear overwhelmingly important at the time. But just as the defeat of Britain by the Americans did not destroy British power, the outcomes of asymmetric wars rarely define long-term national power and hardly ever define the international system.
  • Asymmetric warfare is not a new style of war; it is a permanent dimension of warfare.
  • Peer-to-peer and systemic wars are also constant features but are far less frequent. They are also far more important.
  • There are a lot more asymmetric wars, but a defeat does not shift national power. If you lose a systemic war, the outcome can be catastrophic. 
  • A military force can be shaped to fight frequent, less important engagements or rare but critical wars -- ideally, it should be able to do both. But in military planning, not all wars are equally important.
  • Military leaders and defense officials, obsessed with the moment, must bear in mind that the war currently being fought may be little remembered, the peace that is currently at hand is rarely permanent, and harboring the belief that any type of warfare has become obsolete is likely to be in error.
  • Ukraine drove this lesson home. There will be no war between the United States and Russia over Ukraine. The United States does not have interests there that justify a war, and neither country is in a position militarily to fight a war. The Americans are not deployed for war, and the Russians are not ready to fight the United States.
  • But the events in Ukraine point to some realities.
  • First, the power of countries shifts, and the Russians had substantially increased their military capabilities since the 1990s.
  • Second, the divergent interests between the two countries, which seemed to disappear in the 1990s, re-emerged.
  • Third, this episode will cause each side to reconsider its military strategy and capabilities, and future crises might well lead to conventional war, nuclear weapons notwithstanding.
  • Ukraine reminds us that peer-to-peer conflict is not inconceivable, and that a strategy and defense policy built on the assumption has little basis in reality. The human condition did not transform itself because of an interregnum in which the United States could not be challenged; the last two decades are an exception to the rule of global affairs defined by war.
  • U.S. national strategy must be founded on the control of the sea. The oceans protect the United States from everything but terrorism and nuclear missiles.
  • The greatest challenge to U.S. control of the sea is hostile fleets. The best way to defeat hostile fleets is to prevent them from being built. The best way to do that is to maintain the balance of power in Eurasia. The ideal path for this is to ensure continued tensions within Eurasia so that resources are spent defending against land threats rather than building fleets. Given the inherent tensions in Eurasia, the United States needs to do nothing in most cases. In some cases it must send military or economic aid to one side or both. In other cases, it advises. 
  • The main goal here is to avoid the emergence of a regional hegemon fully secure against land threats and with the economic power to challenge the United States at sea.
  • The U.S. strategy in World War I was to refuse to become involved until it appeared, with the abdication of the czar and increasing German aggression at sea, that the British and French might be defeated or the sea-lanes closed.
  • At that point, the United States intervened to block German hegemony. In World War II, the United States remained out of the war until after the French collapsed and it appeared the Soviet Union would collapse -- until it seemed something had to be done.
  • Even then, it was only after Hitler's declaration of war on the United States after the Japanese attack on Pearl Harbor that Congress approved Roosevelt's plan to intervene militarily in continental Europe.
  • And in spite of operations in the Mediterranean, the main U.S. thrust didn't occur until 1944 in Normandy, after the German army had been badly weakened.
  • In order for this strategy, which the U.S. inherited from the British, to work, the United States needs an effective and relevant alliance structure.
  • The balance-of-power strategy assumes that there are core allies who have an interest in aligning with the United States against regional enemies. When I say effective, I mean allies that are capable of defending themselves to a great extent. Allying with the impotent achieves little. By relevant, I mean allies that are geographically positioned to deal with particularly dangerous hegemons.
  • If we assume Russians to be dangerous hegemons, then the relevant allies are those on the periphery of Russia.
  • The American relationship in all alliances is that the outcome of conflicts must matter more to the ally than to the United States. 
  • The point here is that NATO, which was extremely valuable during the Cold War, may not be a relevant or effective instrument in a new confrontation with the Russians.
  • And since the goal of an effective balance-of-power strategy is the avoidance of war while containing a rising power, the lack of an effective deterrence matters a great deal.
  • It is not certain by any means that Russia is the main threat to American power.
  • In these and other potential cases, the ultimate problem for the United States is that its engagement in Eurasia is at distance. It takes a great deal of time to deploy a technology-heavy force there, and it must be technology-heavy because U.S. forces are always outnumbered when fighting in Eurasia.
  • In many cases, the United States is not choosing the point of intervention, but a potential enemy is creating a circumstance where intervention is necessary. Therefore, it is unknown to planners where a war might be fought, and it is unknown what kind of force they will be up against.
  • The only thing certain is that it will be far away and take a long time to build up a force. During Desert Storm, it took six months to go on the offensive.
  • American strategy requires a force that can project overwhelming power without massive delays.
  • In Ukraine, for example, had the United States chosen to try to defend eastern Ukraine from Russian attack, it would have been impossible to deploy that force before the Russians took over.
  • The United States will face peer-to-peer or even systemic conflicts in Eurasia. The earlier the United States brings in decisive force, the lower the cost to the United States.
  • Current conventional war-fighting strategy is not dissimilar from that of World War II: It is heavily dependent on equipment and the petroleum to power that equipment.
  • It also follows that the tempo of operations must be reduced. The United States has been in constant warfare since 2001.
  • There need to be layers of options between threat and war. 
  • Defense policy must be built on three things: The United States does not know where it will fight. The United States must use war sparingly. The United States must have sufficient technology to compensate for the fact that Americans are always going to be outnumbered in Eurasia. The force that is delivered must overcome this, and it must get there fast.
  • Ranges of new technologies, from hypersonic missiles to electronically and mechanically enhanced infantrymen, are available. But the mindset that peer-to-peer conflict has been abolished and that small unit operations in the Middle East are the permanent feature of warfare prevents these new technologies from being considered.
  • Losing an asymmetric war is unfortunate but tolerable. Losing a systemic war could be catastrophic. Not having to fight a war would be best.
  •  
    "Ever since the end of the Cold War, there has been an assumption that conventional warfare between reasonably developed nation-states had been abolished. During the 1990s, it was expected that the primary purpose of the military would be operations other than war, such as peacekeeping, disaster relief and the change of oppressive regimes. After 9/11, many began speaking of asymmetric warfare and "the long war." Under this model, the United States would be engaged in counterterrorism activities in a broad area of the Islamic world for a very long time. Peer-to-peer conflict seemed obsolete."
anonymous

Russian Modernization, Part 1: Laying the Groundwork - 0 views

  • Russia’s long-term survival depends on such modernization, but the process will require changes and compromise within the Kremlin.
  • But this trip has a different focus for the Russians. Russia is launching a massive modernization program that involves seriously upgrading — if not building from scratch — many key economic sectors, including space, energy, telecommunications, transportation, nanotechnology, military industry and information technology.
  • Moscow has seen incredible success at home and in its near abroad. Now the plan is to make it last as long as possible.
  • ...24 more annotations...
  • two factors that could keep Russia from remaining strong enough
  • First, Russia is suffering from an extreme demographic crisis
  • Russia’s current labor force is already considerably less productive than that of other industrialized nations
  • Second, Russia’s indigenous capital resources are insufficient to maintain its current economic structure
  • Russia is starved for capital because of its infrastructural needs, security costs, chronic low economic productivity, harsh climate and geography.
  • Russia is looking to import the capital, technology and expertise needed to launch Russia forward 30 years technologically
  • Russia has traditionally lagged behind Western nations in the fields of military, transportation, industry and technology but has employed periodic breakneck modernization programs
  • Czar Peter I implemented the massive Westernization campaign
  • Czarina Catherine II continued the Westernization in 1765
  • Soviet leader Josef Stalin implemented rapid industrialization in Russia in the 1920s
  • Mikhail Gorbachev opened the nation to modern technology during Perestroika
  • Russian leaders would throw incredible amounts of human labor at the modernization, not caring if it crushed the population in the process
  • this push for modernization requires the importation of highly qualified people who have trained for years, if not decades.
  • Moscow feels more secure in reaching out to the West for such deals because it has already expanded and consolidated much of its near abroad.
  • The Kremlin must first do several things
  • First, Russia will have to change its restrictive laws against foreign investment and businesses
  • Second, Russia has to moderate anti-Western elements of its foreign policy implemented from 2005 to 2008 to show that the country is pragmatic when it comes to foreigners.
  • Third, Russia will have to decide which investors and businesses to invite into the country.
  • The Kremlin must calculate how far it can modernize without compromising the core of Russia, which depends on domestic consolidation and national security above everything else.
  • Trying to balance modernization with control is the most crucial dilemma facing Moscow — something that has split the government into three camps.
  • the Kremlin
  • the conservatives
  • the third group
  • whether it succeeds or fails, Russia’s current attempt at modernization will determine Moscow’s foreign and economic policy for the next few years
  •  
    June 23, 2010
anonymous

Bill Gates Funds Seawater Cloud Seeding, "the Most Benign Form of Geoengineering" - 0 views

  • a fleet of 1,900 ships costing £5 billion (about $7.5 billion) could arrest the rise in temperature by criss-crossing the oceans and spraying seawater from tall funnels to whiten clouds and increase their reflectivity [The Times].
  • Armand Neukermanns, who is leading the research, said that whitening clouds was “the most benign form of engineering” because, while it might alter rainfall, the effects would cease soon after the machines were switched off [The Times]
  • the billionaire former head of Microsoft announced he’d give nearly $5 million of his fortune to fund research into geoengineering projects.
  •  
    By Andrew Moseman under 80beats at Discover Magazine.
anonymous

Technological Superstition - 0 views

  • The genius of modern mass production was the machine's ability to make cheap identical copies of any invention -- unlike the uneven creations of mortal craftsmen.
  • Hemingway's personal typewriters (he had more than one) are treated like relics. They are roped off, no touching them, they've become the object of pilgrimages, fetching more than $100,000. Yet, the venerated typewriter itself is indistinguishable from other units made on that assembly line.
  • Relics are common in all the major religions of the world.
  • ...8 more annotations...
  • The logic of relics is supernatural.
  • This relic magic operates at full throttle in the world of modern celebrity collectors. The $3 hockey puck used in the 2010 gold medal Olympics championship game later sold for $13,000 because of the unique properties it acquired during the game.
  • Provenance is a key notion in relics and collectables.
  • It establishes a chain of claims about previous ownership.
  • But provenance itself does not explain why we assign any special meaning to the artifact, or to the clone.
  • Yet as we approach the tenth anniversary of the disasters of 9/11, there is an official campaign to assign supernatural potency to the remains of the World Trade Center. The twisted bits of steel salvaged from the site of the fallen towers are being treated as holy relics, taken on a long processions for public viewing, while the disaster site itself is being described as a "sacred place."
  • There is certainly value in keeping old things. Museums that collect artifacts, like say the Computer History Museum, contain both original prototypes and arbitrary production-run units, and these contain great historical information and lessons. But it doesn't (or shouldn't) matter who touched or used them previously. Manufactured artifacts can't be relics. They are all clones.
  • Of course, there is no difference, which is why we place so much emphasis on provenance ("it's been in our family forever!"). In the end, a historical technological artifact is one of the reservoirs in the modern world where superstition still flows freely.
  •  
    By Kevin Kelly at The Technium: "Superstition is alive and well in the high tech world. It is visible most prominently in our technological artifacts, some of which we treat like medieval relics. Recently, supernatural superstition has crept into American treatment of 9/11."
anonymous

The Technium: The Average Place on Earth - 1 views

  • I describe this global system of technology deployed around the planet as an emerging superorganism. It consists of roads, electric lines, telephone cables, buildings, water systems, dams, satellites, ocean buoys and ships, all our computers and data centers, and all 6 billion humans. But while this superorganism of new and old technology operates at the planetary scale, and reaches all continents, and spans the oceans, and reaches into orbital space, it is a thin and uneven layer on the globe. In fact most of the planet, on average, is in a very primitive state.
  • Let's draw a grid around the globe with lines that form a square approximately every 100 km (at the equator). At every intersection of these grid lines we'll take a picture for inspection. There are about 10,000 intersections over the land part of this planet. They will give us a very good statistical portrait of what this planet looks like on land. Shown are 6,000 images of a possible 10,000 degree intersections on land.
  • The imaginary grid is the longitude and latitude grid, and somewhat remarkably, over 6,000 of the 10,000 intersections have already been photographed. Intrepid volunteers sign up at a web site called the Degree Confluence that is half art-project, and half adventure storytelling, in order to select an intersection somewhere on the globe to visit -- no matter how wild -- and record their success with photographs, including a legible snapshot of their GPS proving a bona fide "even" lat-long reading with lots of zeros.
  • ...2 more annotations...
  • The resultant grid of photos is very revealing (below). Here is a portion of southern China, one of the most densely settled regions on the planet. Each image is one degree intersection. There is hardly a building in sight. And for a place that has been intensely farmed for centuries if not millennia, there is a surprising amount of wildness. What it doesn't show is urbanization.
  • Projections for the year 2050 predict that most of the 8 billion people on the planet will live in megacities, with populations over 30 million. And these megacity clusters will form a network made up of smaller cities over 1 million in population. But these incredibly dense clusters will weave through a countryside that is emptying. It is already common to find entire villages in China, India, and South America abandoned by their inhabitants who fled to the swelling cities, leaving behind a few old folks, or often, no one at all. This is the pattern on Earth. Extremely dense and vast populations in a network of megacities connected to each other with nerves of roads and wires, woven over an empty landscape of wild land, marginal pastures, and lightly populated farms. By 2050 and beyond, Earth will be an urban planet, while the average place on the planet will be nearly wild.
  •  
    "Technology and human activity are so global that they operate together as if they were a geological force. Civilization is altering the climate in the same way that volcanoes do and have done; our agriculture alters the biosphere the way climate has in the past; and now megacities are altering the planetary balances of heat and sea level. The technium is a planetary event."
  •  
    That opening paragraph is a keeper.
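The grid arithmetic in the highlights above is worth a one-minute check. A minimal sketch: the circumference figure is standard, the cosine scaling is basic spherical geometry, and none of these numbers come from Kelly's post itself.

```python
# How far apart are whole-degree lat/long intersections?
import math

EARTH_CIRCUMFERENCE_KM = 40_075
km_per_deg = EARTH_CIRCUMFERENCE_KM / 360
print(f"equator: {km_per_deg:.0f} km between intersections")   # ~111 km

# East-west spacing shrinks toward the poles as cos(latitude).
for lat in (30, 45, 60):
    spacing = km_per_deg * math.cos(math.radians(lat))
    print(f"lat {lat}: {spacing:.0f} km between intersections")
```

So the "approximately every 100 km" figure holds near the equator, and the ~10,000 land intersections in the post are simply the subset of all grid crossings that happen to fall on land; the grid samples high latitudes more densely as the meridians converge.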
anonymous

Jaron Lanier's Ignorance Of History, Basic Economics And Efficiency Is Getting Ridiculous - 1 views

  • The Kodak/Instagram comparison comes up over and over again, and it's moronic. It makes no sense. To demonstrate, let's take something else that's old and something else that's modern that sorta-kinda seems similar, and compare the two: Very, very, very few people make money "auctioning" goods via Christie's. Yet, a few years ago, eBay noted that 724,000 Americans made their primary or secondary incomes from eBay sales, with another 1.5 million supplementing their income. In the simplistic world of Jaron Lanier, this should be proof that eBay is good, and Christie's is bad, right? But, of course that's silly.
  • The fact that Instagram only employed a few people and Kodak employed a lot says nothing about the impact of technology on modern society or the economic status of the middle class.
  • First off, it didn't involve toxic chemicals that create massive amounts of waste and pollution. Second, because people don't have to buy expensive rolls of film to take pictures any more, they get to save money and put it to better use. Third, because we no longer have to worry about the expense of each photo, people are free to take many more photos and capture more memories and generally enjoy photography more. Fourth, because instagram makes the sharing of photos much easier, it enables much greater communication among family and friends, building stronger community bonds. I mean, you could go on and on and on.
  • ...10 more annotations...
  • “At the height of its power, agriculture employed 90 percent of the population and produced output worth vastly more than half of U.S. GDP. It even invented countless plant hybrids and animal breeds. But today nearly all farms of the past have gone bankrupt (or, seeing the economic writing on the wall, were transformed to other uses). Agriculture today employs only about one percent of the workforce. Where did all those jobs disappear? And what happened to the wealth that all those good agricultural jobs created?”
  • Economic efficiency often shifts jobs around, but it creates a much larger pie, which leads to new job creation. We can reasonably question whether there are people who get left behind, or what kinds of skills are favored as industries become obsolete, but the idea that efficiency destroys the middle class is just silly.
  • "We kind of made a bargain, a social contract, in the 20th century that even if jobs were pleasant people could still get paid for them. Because otherwise we would have had a massive unemployment. And so to my mind, the right question to ask is, why are we abandoning that bargain that worked so well?" When did "we" make this "bargain" and, honestly, what is he talking about? There was no such bargain made. Jobs have nothing to do with whether they are "pleasant." And we didn't create jobs to avoid unemployment. We created jobs because there was demand for work, meaning there was demand for products and services, just as there still is today.
  • New jobs were created because of demand, and because new technologies create efficiencies which create and enable new jobs. It has nothing to do with "decisions" being made or "social contracts." It has to do with efficiency and new things being enabled through innovation.
  • This is the broken window fallacy exploded exponentially for a digital era. It seems to assume that the only "payment" is monetary: that if you do something for free online (share a video or a photo, like a link, listen to a song) you're somehow getting screwed because some company gets that info and you're not getting paid.
  • But that's ridiculous. The people are getting "paid" in the form of the benefit they get: free hosting and software for hosting/streaming videos and pictures, free ability to communicate easily with friends, access to music, etc. The list goes on and on, but Lanier seems to not understand the idea that there are non-monetary benefits, which is why various online services which he seems to hate are so popular.
    • anonymous
       
      Whuffie!
  • A token few will find success on Kickstarter or YouTube, while overall wealth is ever more concentrated and social mobility rots. Social media sharers can make all the noise they want, but they forfeit the real wealth and clout needed to be politically powerful. Real wealth and clout instead concentrate ever more on the shrinking island occupied by elites who run the most powerful computers.
  • This is bullshit, plain and simple. Under the "old" system, you had a smaller "token few" who found success via getting a major label contract or having a publisher accept them into the club of published authors.
  • It's as if Lanier is talking about a mythical past that never existed to make some point about the future. But all of the evidence suggests that more people are now able to make use of these tools to create new incomes and new opportunities to make money, while in the past you had to wait for some gatekeeper.
  • Lanier, a beneficiary of the old gatekeepers, may like the old system, but he's confused about history, facts, reality and economics in making this ridiculous argument -- and it's a shame that those interviewing him or publishing his ridiculously misinformed screeds don't seem to ever challenge him on his claims.
    • anonymous
       
      Given the Gladwellian attention he's getting, this would seem prudent. If there *is* something of value in there, let's use that wacky, radical tool: science - to figure it out. :)
  •  
    "So... we'd already taken a stab at debunking Jaron Lanier's "gobbledygook economics" a few weeks back when it started appearing, but since then there's been more Lanier everywhere (obviously, in coordination with his book release), and each time it seems more ridiculous than the last. Each time, the focus is on the following economically ridiculous concepts: (1) there should be micropayments for anyone doing anything free online because someone benefits somewhere (2) modern efficiency via technology has destroyed the middle class. Both of these claims make no sense at all. "
anonymous

Disruption guru Christensen: Why Apple, Tesla, VCs, academia may die - 0 views

  • If a newcomer thinks it can win by competing at the high end, “the incumbents will always kill you.” If it comes in at the bottom of the market and offers something that at first is not as good, the legacy companies won’t feel threatened until too late, after the newcomer has gained a foothold in the market. He offered as an example the introduction of cheap transistor radios. High-fidelity, vacuum-tube-powered incumbents felt no threat from the poor-quality audio the transistors produced and missed the technological shift that eventually killed many of them.
  • Instead of coming in at the low end of the market with a cheap electric vehicle, Tesla Motors competes with premium offerings from legacy automakers. “Who knows whether they will be successful or not,” he said. “They have come up with cars that in fact compete reasonably well and they cost $100,000 and god bless them.” “But if you really want to make a big product market instead of a niche product market, the kind of question you want to ask for electric vehicles is, I wonder if there is a market out there for customers who would just love to have a product that won’t go very far or go very fast. The answer is obvious.”
  • “The parents of teenagers would love to have a car that won’t go very far or go very fast. They could just cruise around the neighborhood, drive it to school, see their friends, plug it in overnight.” Because that kind of electric car offers something that doesn’t threaten incumbents and provides a low-end solution, Christensen says that has a greater chance of surviving and ultimately upending the auto market than Tesla’s flashy Roadsters and sedans.
  • ...5 more annotations...
  • Christensen said he thinks the venture capital world needs to be disrupted because it is focused too much on making big killings on big investments at a time when there are plenty of good smaller investments to be made on companies that will be disruptive.
  • “Venture capital is always wanting to go up market. It’s like the Rime of the Ancient Mariner. 'Water, water everywhere and not a drop to drink.' People in private equity complain that they have so much capital and so few places to invest. But you have lots of entrepreneurs trying to raise money at the low end and find that they can’t get funding because of this mismatch. I think that there is an opportunity there.”
  • “For 300 years, higher education was not disruptable because there was no technological core. If San Jose State wants to become a globally known research institution, they have to emulate UC Berkeley and Caltech. They can’t disrupt,” he said on Wednesday.
  • “But now online learning brings to higher education this technological core, and people who are very complacent are in deep trouble. The fact that everybody was trying to move upmarket and make their university better and better and better drove prices of education up to where they are today.
  • “Fifteen years from now more than half of the universities will be in bankruptcy, including the state schools. In the end, I am excited to see that happen.”
  •  
    "Basically, his theory of disruption centers around how dominant industry leaders will react to a newcomer: "It allows you to predict whether you will kill the incumbents or whether the incumbents will kill you.""
anonymous

In Japan, the Fax Machine Is Anything but a Relic - 0 views

  • The Japanese government’s Cabinet Office said that almost 100 percent of business offices and 45 percent of private homes had a fax machine as of 2011.
  • “There is still something in Japanese culture that demands the warm, personal feelings that you get with a handwritten fax,” said Mr. Sugahara, 43.
  • Japan’s reluctance to give up its fax machines offers a revealing glimpse into an aging nation that can often seem quietly determined to stick to its tried-and-true ways, even if the rest of the world seems to be passing it rapidly by. The fax addiction helps explain why Japan, which once revolutionized consumer electronics with its hand-held calculators, Walkmans and, yes, fax machines, has become a latecomer in the digital age, and has allowed itself to fall behind nimbler competitors like South Korea and China.
    • anonymous
       
      This would sure explain Nintendo's baffling lag in the online arena.
  •  
    "Japan is renowned for its robots and bullet trains, and has some of the world's fastest broadband networks. But it also remains firmly wedded to a pre-Internet technology - the fax machine - that in most other developed nations has joined answering machines, eight-tracks and cassette tapes in the dustbin of outmoded technologies."
anonymous

World War II and the Origins of American Unease - 0 views

  • The first thing that leaps to mind is the manner in which World War II began for the three great powers: the United States, the Soviet Union and the United Kingdom.
  • For all three, the war started with a shock that redefined their view of the world.
  • There was little doubt among American leaders that war with Japan was coming. The general public had forebodings, but not with the clarity of its leaders.
  • ...27 more annotations...
  • Neither the leaders nor the public thought the Japanese were nearly so competent.
  • Pearl Harbor intersected with another shock to the American psyche — the Great Depression. These two events shared common characteristics:
  • First, they seemed to come out of nowhere.
  • This introduced a new dimension into American culture.
  • The Great Depression and Pearl Harbor created a different sensibility that suspected that prosperity and security were an illusion, with disaster lurking behind them.
  • The two shocks created a dark sense of foreboding that undergirds American society to this day.
  • Catastrophe therefore might come at any moment. The American approach to the Cold War is symbolized by Colorado's Cheyenne Mountain.
  • The Americans analyzed their forced entry into World War II and identified what they took to be the root cause: the Munich Agreement allowing Nazi Germany to annex parts of Czechoslovakia.
  • If the origin of World War II was the failure to take pre-emptive action against the Germans in 1938, then it followed that the Pacific War might have been prevented by more aggressive actions early on.
  • Acting early and decisively remains the foundation of U.S. foreign policy to this day. The idea that not acting in a timely and forceful fashion led to World War II underlies much American discourse on Iran or Russia.
  • Pearl Harbor (and the 1929 crash) not only led to a sense of foreboding and a distrust in the wisdom of political and military leaders, but it also replaced a strategy of mobilization after war begins with a strategy of permanent mobilization.
  • The Soviet Union had its own Pearl Harbor on June 22, 1941, when the Germans invaded in spite of the friendship treaty signed between them in 1939.
  • That treaty was struck for two reasons: First, the Russians couldn't persuade the British or French to sign an anti-Hitler pact. Second, a treaty with Hitler would allow the Soviets to move their border further west without firing a shot.
  • The Soviets made a single miscalculation: They assumed a German campaign in France would replay the previous Great War.
  • That the moment of attack was a surprise compounded the challenge, but the real problem was strategic miscalculation, not simply an intelligence or command failure.
  • The Soviet forces were not ready for an attack, and their strategy collapsed with France, so the decision for war was entirely Germany's.
  • What the Soviets took away from the June 1941 invasion was a conviction that political complexity could not substitute for a robust military. The United States ended World War II with the conviction that a core reason for that war was the failure of the United States. The Soviets ended World War II with the belief that their complex efforts at coalition building and maintaining the balance of power had left them utterly exposed by one miscalculation on France — one that defied the conventional wisdom.
  • The Warsaw Pact was less an alliance than a geopolitical reality. For the most part it consisted of states under the direct military, intelligence or political control of the Soviet Union. The military value of the bloc might be limited, and its room for maneuver was equally limited.
  • Nuclear attack was not the Soviets’ primordial fear, though that fear must not be minimized. The primordial fear in Moscow was an attack from the West. The Soviet Union’s strategy was to position its own forces as far to the west as possible.
  • The Soviets were not ideologues. They were geopoliticians, and China represented a potential threat that the Soviets could not control. Ideology didn't matter. China would never serve the role that Poland had to. The Sino-Soviet relationship fell apart fairly quickly.
  • Beneath communist fervor, cynical indifference and dread of the Soviet secret police, the Russians found something new while the Americans found something old.
  • The collapse of France caused the British to depend on only two things:
  • One was that the English Channel, combined with the fleet and the Royal Air Force, would hold the Germans at bay. The second was that in due course, the United States would be drawn into the war. Their two calculations proved correct.
  • The Americans did not take the British Empire. It was taken away by the shocking performance of the French. On paper, the French had an excellent army — superior to the Germans, in many ways. Yet they collapsed in weeks. If we were to summarize the British sensibility, after defiance came exhaustion and then resentment.
  • The Americans retain their dread even though World War II was in many ways good to the United States. It ended the Great Depression, and in the aftermath, between the G.I. Bill, VA loans and the Interstate Highway System, the war created the American professional middle class, with private homes for many and distance and space that could be accessed easily.
  • Rather than a Machiavellian genius, Putin is the heir to the German invasion on June 22, 1941. He seeks strategic depth controlled by his own military. And his public has rallied to him.
  • While we are celebrating the end of World War II, it is useful to examine its beginnings. So much of what constitutes the political-military culture, particularly of the Americans, was forged by the way that World War II began.
  •  
    "We are at the 70th anniversary of the end of World War II in Europe. That victory did not usher in an era of universal peace. Rather, it introduced a new constellation of powers and a complex balance among them. Europe's great powers and empires declined, and the United States and the Soviet Union replaced them, performing an old dance to new musical instruments. Technology, geopolitics' companion, evolved dramatically as nuclear weapons, satellites and the microchip - among myriad wonders and horrors - changed not only the rules of war but also the circumstances under which war was possible. But one thing remained constant: Geopolitics, technology and war remained inseparable comrades."