
New Media Ethics 2009 course: Group items tagged Economist


Weiye Loh

The X Factor of Economics - People - NYTimes.com

  • generally speaking, economists who thought it was a good idea at the time think it worked, and economists who thought otherwise beg to differ. And both sides make their cases with plenty of hard numbers.
  • Why do economists argue at all? Given that Fed members and economists are looking at the same data, and given the reams of evidence accumulated over decades — not to mention a few centuries of great minds, great theories and thick books that preceded this crisis — why isn’t a right answer self-evident?
  • the limits of economics is a subject that many in the field have been discussing for years, in print, in discussions with each other, and, in the case of Robert Solow, Nobel Prize winner and M.I.T. professor emeritus, with graduate students. “I talk about what it is about economics and economic life that leads to differences of opinion,” Mr. Solow said. “One point I always make to my graduate students is, avoid sound bites. Never sound more certain than you are.”
  • the world doesn’t offer up clean economic experiments is a common refrain in the discipline
  • It’s not just that there is so little clear signal amid so much noise. It’s that many economists have a unique idea of what signal to listen to and what priority it deserves.
  • another great variable: personal values.
  • Economics, Mr. Mankiw concludes, won’t tell us, definitively, whether Peter or Paula is paying too much, because an answer inevitably leads to matters of values, which inevitably leads to different answers.
  • This is not to suggest that economics is a total free-for-all, lacking a broad consensus on any subject. Polls of economists have found near unanimity on topics like tariffs and import quotas (bad), centralized economies (very bad) and flexible, floating exchange rates (very good).
  • economics will forever have to contend with the biggest X factor of all: people.
  • certain amount of psychological guesswork is part of an economist’s job, which accounts for the rise in popularity of behavioral economics, an effort to account for the slippery, indefinite nexus of money and humans.
  • there’s a good reason that human irrationality isn’t part of the standard economic models, and this gets to the dilemma of economics.
  • On why economists disagree: The X Factor of Economics: People

When Value Judgments Masquerade as Science - NYTimes.com

  • Most people think of the term in the context of production of goods and services: more efficient means more valuable output is wrung from a given bundle of real resources (which is good) or that fewer real resources are burned up to produce a given output (which is also good).
  • In economics, efficiency is also used to evaluate alternative distributions of an available set of goods and services among members of society. In this context, I distinguished in last week’s post between changes in public policies (reallocations of economic welfare) that make some people feel better off and none feel worse off and those that make some people feel better off but others feel worse off.
  • consider whether economists should ever become advocates for a revaluation of China’s currency, the renminbi — or, alternatively, for imposing higher tariffs on Chinese imports. Such a policy would tend to improve the lot of shareholders and employees of manufacturers competing with Chinese imports. Yet it would make American consumers of Chinese goods worse off. If the renminbi were significantly and artificially undervalued against the United States dollar, relative to a free-market exchange rate without government intervention, that would be tantamount to China running a giant, perennial sale on Chinese goods sold to the United States. If you’re an American consumer, what’s not to like about that? So why are so many economists advocating an end to this sale?
  • Strict constructionists argue that their analyses should confine themselves strictly to positive (that is, descriptive) analysis: identify who wins and who loses from a public policy, and how much, but leave judgments about the social merits of the policy to politicians.
  • a researcher’s political ideology or vested interest in a particular theory can still enter even ostensibly descriptive analysis by the data set chosen for the research; the mathematical transformations of raw data and the exclusion of so-called outlier data; the specific form of the mathematical equations posited for estimation; the estimation method used; the number of retrials in estimation to get what strikes the researcher as “plausible” results, and the manner in which final research findings are presented. This is so even among natural scientists discussing global warming. As the late medical journalist Victor Cohn once quoted a scientist, “I would not have seen it if I did not believe it.”
  • anyone who sincerely believes that seemingly scientific, positive research in the sciences — especially the social sciences — is invariably free of the researcher’s own predilections is a Panglossian optimist.
  • majority of economists have been unhappy for more than a century with the limits that the strict constructionist school would place upon their professional purview. They routinely do enter the forum in which public policy is debated
  • The problem with welfare analysis is not so much that ethical dimensions typically enter into it, but that economists pretend that is not so. They do so by justifying their normative dicta with appeal to the seemingly scientific but actually value-laden concept of efficiency.
  • economics is not a science that only describes, measures, explains and predicts human interests, values and policies — it also evaluates, promotes, endorses or rejects them. The predicament of economics and all other social sciences consists in their failure to acknowledge honestly their value orientation in their pathetic and inauthentic pretension to emulate the natural sciences they presume to be value free.
  • By the Kaldor-Hicks criterion, a public policy is judged to enhance economic efficiency and overall social welfare — and therefore is to be recommended by economists to decision-makers — if those who gain from the policy could potentially bribe those who lose from it into accepting it and still be better off (Kaldor), or those who lose from it were unable to bribe the gainers into forgoing the policy (Hicks). That the bribe was not paid merely underscores the point.
  • In applications, the Kaldor-Hicks criterion and the efficiency criterion amount to the same thing. When Jack gains $10 and Jill loses $5, social gains increase by $5, so the policy is a good one. When Jack gains $10 and Jill loses $15, there is a deadweight loss of $5, so the policy is bad. Evidently, on the Kaldor-Hicks criterion one need not know who Jack and Jill are, nor anything about their economic circumstances. Furthermore, a truly stunning implication of the criterion is that if a public policy takes $X away from one citizen and gives it to another, and nothing else changes, then such a policy is welfare neutral. Would any non-economist buy that proposition?
  • Virtually all modern textbooks in economics base their treatment of efficiency on Kaldor-Hicks, usually without acknowledging the ethical dimensions of the concept. I use these texts in my economics courses as, I suppose, do most of my colleagues around the world. But I explicitly alert my students to the ethical pitfalls in normative welfare economics, with commentaries such as “How Economists Bastardized Benthamite Utilitarianism” and “The Welfare Economics of Health Insurance,” or with assignments that force students to think about this issue. My advice to students and readers is: When you hear us economists wax eloquent on the virtue of greater efficiency — beware!
  • When Value Judgments Masquerade as Science
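The Jack-and-Jill arithmetic above can be sketched in a few lines of Python. The names and dollar amounts are the annotation's own; the function is just an illustration of the criterion's bookkeeping, not anyone's published implementation:

```python
def kaldor_hicks_verdict(gains):
    """Apply the Kaldor-Hicks bookkeeping: sum dollar gains and
    losses across individuals and judge the policy by the net,
    ignoring who the winners and losers are."""
    net = sum(gains.values())
    if net > 0:
        return "efficiency gain of %g: recommend" % net
    if net < 0:
        return "deadweight loss of %g: reject" % -net
    return "welfare neutral"

# Jack gains $10, Jill loses $5: net +$5, so the policy passes.
print(kaldor_hicks_verdict({"Jack": 10, "Jill": -5}))
# Jack gains $10, Jill loses $15: deadweight loss of $5, rejected.
print(kaldor_hicks_verdict({"Jack": 10, "Jill": -15}))
# A pure transfer nets to zero, so the criterion shrugs.
print(kaldor_hicks_verdict({"Jack": 100, "Jill": -100}))
```

The third call is the "truly stunning implication" in the annotation: taking $100 from Jill and handing it to Jack registers as welfare neutral, because the criterion never asks who Jack and Jill are.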

Uwe E. Reinhardt: How Convincing Is the Economists' Case for Free Trade? - NYTimes.com

  • “Emerging Markets as Partners, Not Rivals,” a fine commentary in The New York Times on Sunday by N. Gregory Mankiw of Harvard prompted me to take a vacation from the dreariness of health policy to visit one of the economic profession’s intellectual triumphs: the theory that every country gains by unfettered international trade.
  • That theory is less popular among noneconomists, especially politicians and unions. They wring their hands at what is called offshoring of jobs and often have no problem obstructing free trade with such barriers as tariffs or import quotas, which they deem in the national interest. (Two blogs recently offered examples of this posture.)
  • Economists assert that over the longer run, the owners of businesses that lose their markets in international competition and their employees will shift into new economic endeavors in which they can function more competitively. Skeptics, of course, often respond with the retort of John Maynard Keynes: “In the long run, we’re all dead.”
  • this truth, which economists hold self-evident: Relative to a status quo of no or limited international trade, permitting full free trade across borders will leave in its wake some immediate losers, but citizens who gain from such trade gain much more than the losers lose. On a net basis, therefore, each nation gains over all from such trade.
  • In their work, economists are typically not nationalistic. National boundaries mean little to them, other than that much data happen to be collected on a national basis. Whether a fellow American gains from a trade or someone in Shanghai does not make any difference to most economists, nor does it matter to them where the losers from global competition live, in America or elsewhere.
  • I say most economists, because here and there one can find some who do seem to worry about how fellow Americans fare in the matter of free trade. In a widely noted column in The Washington Post, “Free Trade’s Great, but Offshoring Rattles Me,” for example, my Princeton colleague Alan Blinder wrote: “I’m a free trader down to my toes. Always have been. Yet lately, I’m being treated as a heretic by many of my fellow economists. Why? Because I have stuck my neck out and predicted that the offshoring of service jobs from rich countries such as the United States to poor countries such as India may pose major problems for tens of millions of American workers over the coming decades. In fact, I think offshoring may be the biggest political issue in economics for a generation. When I say this, many of my fellow free traders react with a mixture of disbelief, pity and hostility. Blinder, have you lost your mind?” Professor Blinder has estimated that 30 million to 40 million jobs in the United States are potentially offshorable — including those of scientists, mathematicians, radiologists and editors on the high end of the market, and those of telephone operators, clerks and typists on the low end. He says he is rattled by the question of how our country will cope with this phenomenon, especially in view of our tattered social safety net. “That is why I am going public with my concerns now,” he concludes. “If we economists stubbornly insist on chanting ‘free trade is good for you’ to people who know that it is not, we will quickly become irrelevant to the public debate. Compared with that, a little apostasy should be welcome.”
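The net-gains claim behind the economists' case can be illustrated with a toy Ricardian example. The labor costs below are invented for illustration (this is not Reinhardt's or Mankiw's calculation); only the pattern matters: each country is relatively better at one good.

```python
# Hours of labor needed per unit of each good (invented numbers).
hours = {
    "Home":    {"cloth": 2, "wine": 4},
    "Foreign": {"cloth": 6, "wine": 3},
}
LABOR = 12.0  # hours available in each country

def autarky_output(country):
    """No trade: each country splits its labor evenly between goods."""
    return {good: (LABOR / 2) / h for good, h in hours[country].items()}

# With trade, each country specializes in its comparative-advantage
# good: cloth at Home (opportunity cost 0.5 wine per cloth, vs. 2 at
# Foreign), wine at Foreign.
specialized = {
    "cloth": LABOR / hours["Home"]["cloth"],
    "wine":  LABOR / hours["Foreign"]["wine"],
}

world_autarky = {
    good: autarky_output("Home")[good] + autarky_output("Foreign")[good]
    for good in ("cloth", "wine")
}
print(world_autarky)  # world output without trade
print(specialized)    # world output with specialization and trade
```

World output rises from 4 cloth and 3.5 wine to 6 cloth and 4 wine, so the winners' gains are large enough, in principle, to compensate the losers. That is exactly (and only) what the net-gain claim asserts; whether compensation is ever paid is the question Blinder raises.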

Breakthrough Europe: A (Heterodox) Lesson in Economics from Ha-Joon Chang

  • But, to the surprise of the West, that steel mill grew into POSCO, the world's third-largest and Asia's most profitable steel maker.
  • South Korea's developmental state, which relied on active government investment in R&D and crucial support for capital-intensive sectors in the form of start-up subsidies and infant industry protection, transformed the country into the richest on the Asian continent (with the exception of Singapore and Hong Kong). LG and Hyundai are similar legacies of Korea's spectacular industrial policy success.
  • Even though they were not trained as economists, the economic officials of East Asia knew some economics. However, especially until the 1970s, the economics they knew was mostly not of the free-market variety. The economics they happened to know was the economics of Karl Marx, Friedrich List, Joseph Schumpeter, Nicholas Kaldor and Albert Hirschman. Of course, these economists lived in different times, contended with different problems and had radically differing political views (ranging from the very right-wing List to the very left-wing Marx). However, there was a commonality between their economics. It was the recognition that capitalism develops through long-term investments and technological innovations that transform the productive structure, and not merely an expansion of existing structures, like inflating a balloon.
  • Arguing that governments can pick winners, Professor Chang urges us to reclaim economic planning, not as a token of centrally-planned communism, but rather as the simple reality behind our market economies today:
  • Capitalist economies are in large part planned. Governments in capitalist economies practice planning too, albeit on a more limited basis than under communist central planning. All of them finance a significant share of investment in R&D and infrastructure. Most of them plan a significant chunk of the economy through the planning of the activities of state-owned enterprises. Many capitalist governments plan the future shape of individual industrial sectors through sectoral industrial policy or even that of the national economy through indicative planning. More importantly, modern capitalist economies are made up of large, hierarchical corporations that plan their activities in great detail, even across national borders. Therefore, the question is not whether you plan or not. It is about planning the right things at the right levels.
  • Drawing a clear distinction between communist central planning and capitalist 'indicative' planning, Chang notes that the latter: ... involves the government ... setting some broad targets concerning key economic variables (e.g., investments in strategic industries, infrastructure development, exports) and working with, not against, the private sector to achieve them. Unlike under central planning, these targets are not legally binding; hence the adjective 'indicative'. However, the government will do its best to achieve them by mobilizing various carrots (e.g., subsidies, granting of monopoly rights) and sticks (e.g., regulations, influence through state-owned banks) at its disposal.
  • Chang observes that: France had great success in promoting investment and technological innovation through indicative planning in the 1950s and 60s, thereby overtaking the British economy as Europe's second industrial power. Other European countries, such as Finland, Norway and Austria, also successfully used indicative planning to upgrade their economies between the 1950s and the 1970s. The East Asian miracle economies of Japan, Korea and Taiwan used indicative planning too between the 1950s and 1980s. This is not to say that all indicative planning exercises have been successful; in India, for example, it has not. Nevertheless, the European and East Asian examples show that planning in certain forms is not incompatible with capitalism and may even promote capitalist development very well.
  • As we have argued before, the current crisis raging through Europe (in large part caused by free-market economics), forces us to reconsider our economic options. More than ever before, now is the time to rehabilitate indicative planning and industrial policy as key levers in our arsenal of policy tools.
  • A heterodox Cambridge economist exposes 23 myths behind the neoliberal free-market dogma and urges us to recognize that "capitalism develops through long-term investments and technological innovations," spearheaded by an activist state committed to sustainable economic development.

The Breakthrough Institute: New Report: How Efficiency Can Increase Energy Consumption

  • There is a large expert consensus and strong evidence that below-cost energy efficiency measures drive a rebound in energy consumption that erodes much and in some cases all of the expected energy savings, concludes a new report by the Breakthrough Institute. "Energy Emergence: Rebound and Backfire as Emergent Phenomena" covers over 96 published journal articles and is one of the largest reviews of the peer-reviewed journal literature to date. (Readers in a hurry can download Breakthrough's PowerPoint demonstration here or download the full paper here.)
  • In a statement accompanying the report, Breakthrough Institute founders Ted Nordhaus and Michael Shellenberger wrote, "Below-cost energy efficiency is critical for economic growth and should thus be aggressively pursued by governments and firms. However, it should no longer be considered a direct and easy way to reduce energy consumption or greenhouse gas emissions." The lead author of the new report is Jesse Jenkins, Breakthrough's Director of Energy and Climate Policy; Nordhaus and Shellenberger are co-authors.
  • The findings of the new report are significant because governments have in recent years relied heavily on energy efficiency measures as a means to cut greenhouse gases. "I think we have to have a strong push toward energy efficiency," said President Obama recently. "We know that's the low-hanging fruit, we can save as much as 30 percent of our current energy usage without changing our quality of life." While there is robust evidence for rebound in academic peer-reviewed journals, it has largely been ignored by major analyses, including the widely cited 2009 McKinsey and Co. study on the cost of reducing greenhouse gases.
  • The idea that increased energy efficiency can increase energy consumption at the macro-economic level strikes many as a new idea, or paradoxical, but it was first observed in 1865 by British economist William Stanley Jevons, who pointed out that Watt's more efficient steam engine and other technical improvements that increased the efficiency of coal consumption actually increased rather than decreased demand for coal. More efficient engines, Jevons argued, would increase future coal consumption by lowering the effective price of energy, thus spurring greater demand and opening up useful and profitable new ways to utilize coal. Jevons was proven right, and the reality of what is today known as "Jevons Paradox" has long been uncontroversial among economists.
  • Economists have long observed that increasing the productivity of any single factor of production -- whether labor, capital, or energy -- increases demand for all of those factors. This is one of the basic dynamics of economic growth. Luddites who feared there would be fewer jobs with the emergence of weaving looms were proved wrong by lower price for woven clothing and demand that has skyrocketed (and continued to increase) ever since. And today, no economist would posit that an X% improvement in labor productivity would lead directly to an X% reduction in employment. In fact, the opposite is widely expected: labor productivity is a chief driver of economic growth and thus increases in employment overall. There is no evidence, the report points out, that energy is any different, as per capita energy consumption everywhere on earth continues to rise, even as economies become more efficient each year.
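The rebound logic above can be made concrete with a minimal constant-elasticity sketch. The functional form and the parameter values are assumptions for illustration, not the Breakthrough report's model: an efficiency gain lowers the effective price of energy services, demand for those services responds, and the response eats into the engineering savings.

```python
def rebound_fraction(efficiency_gain, elasticity):
    """Share of the expected (engineering) energy savings eroded by
    the demand response, in a one-good constant-elasticity sketch.
    An efficiency gain e means one unit of energy service needs
    (1 - e) units of energy, so the effective price of the service
    falls by the factor (1 - e) and demand for the service rises
    by (1 - e) ** -elasticity."""
    price_factor = 1.0 - efficiency_gain
    services = price_factor ** -elasticity   # demand response
    energy_used = services * price_factor    # relative to before
    actual_saving = 1.0 - energy_used
    return 1.0 - actual_saving / efficiency_gain

# A 20% efficiency gain under different demand elasticities:
print(round(rebound_fraction(0.2, 0.0), 3))  # no response: zero rebound
print(round(rebound_fraction(0.2, 0.6), 2))  # most savings survive? check
print(rebound_fraction(0.2, 1.5) > 1.0)      # elasticity > 1: backfire
```

At elasticity 0 the full engineering savings materialize; at elasticity 1 they are consumed entirely; above 1, consumption rises on net, which is the Jevons "backfire" case the report describes for coal.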

Is 'More Efficient' Always Better? - NYTimes.com

  • Efficiency is the seemingly value-free standard economists use when they make the case for particular policies — say, free trade, more liberal immigration policies, cap-and-trade policies on environmental pollution, the all-volunteer army or congestion tolls. The concept of efficiency is used to justify a reliance on free-market principles, rather than the government, to organize the health care sector, or to make recommendations on taxation, government spending and monetary policy. All of these public policies have one thing in common: They create winners and losers among members of society.
  • can it be said that a more efficient resource allocation is better than a less efficient one, given the changes in the distribution of welfare among members of society that these allocations imply?
  • Suppose a restructuring of the economy has the effect of increasing the growth of average gross domestic product per capita, but that the benefits of that growth accrue disproportionately to a minority of citizens, while others are worse off as a result, as appears to have been the case in the United States in the last several decades. Can economists judge this to be a good thing?
  • Indeed, how useful is efficiency as a normative guide to public policy? Can economists legitimately base their advocacy of particular policies on that criterion? That advocacy, especially when supported by mathematical notation and complex graphs, may look like economic science. But when greater efficiency is accompanied by a redistribution of economic privilege in society, subjective ethical dimensions inevitably get baked into the economist’s recommendations.
  • Is 'More Efficient' Always Better?

Hayek, The Use of Knowledge in Society | Library of Economics and Liberty

  • the "data" from which the economic calculus starts are never for the whole society "given" to a single mind which could work out the implications and can never be so given.
  • The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.
  • The economic problem of society
  • is a problem of the utilization of knowledge which is not given to anyone in its totality.
  • who is to do the planning. It is about this question that all the dispute about "economic planning" centers. This is not a dispute about whether planning is to be done or not. It is a dispute as to whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals. Planning in the specific sense in which the term is used in contemporary controversy necessarily means central planning—direction of the whole economic system according to one unified plan. Competition, on the other hand, means decentralized planning by many separate persons. The halfway house between the two, about which many people talk but which few like when they see it, is the delegation of planning to organized industries, or, in other words, monopoly.
  • Which of these systems is likely to be more efficient depends mainly on the question under which of them we can expect that fuller use will be made of the existing knowledge.
  • It may be admitted that, as far as scientific knowledge is concerned, a body of suitably chosen experts may be in the best position to command all the best knowledge available—though this is of course merely shifting the difficulty to the problem of selecting the experts.
  • Today it is almost heresy to suggest that scientific knowledge is not the sum of all knowledge. But a little reflection will show that there is beyond question a body of very important but unorganized knowledge which cannot possibly be called scientific in the sense of knowledge of general rules: the knowledge of the particular circumstances of time and place. It is with respect to this that practically every individual has some advantage over all others because he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active coöperation.
  • the relative importance of the different kinds of knowledge; those more likely to be at the disposal of particular individuals and those which we should with greater confidence expect to find in the possession of an authority made up of suitably chosen experts. If it is today so widely assumed that the latter will be in a better position, this is because one kind of knowledge, namely, scientific knowledge, occupies now so prominent a place in public imagination that we tend to forget that it is not the only kind that is relevant.
  • It is a curious fact that this sort of knowledge should today be generally regarded with a kind of contempt and that anyone who by such knowledge gains an advantage over somebody better equipped with theoretical or technical knowledge is thought to have acted almost disreputably. To gain an advantage from better knowledge of facilities of communication or transport is sometimes regarded as almost dishonest, although it is quite as important that society make use of the best opportunities in this respect as in using the latest scientific discoveries.
  • The common idea now seems to be that all such knowledge should as a matter of course be readily at the command of everybody, and the reproach of irrationality leveled against the existing economic order is frequently based on the fact that it is not so available. This view disregards the fact that the method by which such knowledge can be made as widely available as possible is precisely the problem to which we have to find an answer.
  • One reason why economists are increasingly apt to forget about the constant small changes which make up the whole economic picture is probably their growing preoccupation with statistical aggregates, which show a very much greater stability than the movements of the detail. The comparative stability of the aggregates cannot, however, be accounted for—as the statisticians occasionally seem to be inclined to do—by the "law of large numbers" or the mutual compensation of random changes.
  • the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the "man on the spot."
  • We need decentralization because only thus can we insure that the knowledge of the particular circumstances of time and place will be promptly used. But the "man on the spot" cannot decide solely on the basis of his limited but intimate knowledge of the facts of his immediate surroundings. There still remains the problem of communicating to him such further information as he needs to fit his decisions into the whole pattern of changes of the larger economic system.
  • The problem which we meet here is by no means peculiar to economics but arises in connection with nearly all truly social phenomena, with language and with most of our cultural inheritance, and constitutes really the central theoretical problem of all social science. As Alfred Whitehead has said in another connection, "It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them." This is of profound significance in the social field. We make constant use of formulas, symbols, and rules whose meaning we do not understand and through the use of which we avail ourselves of the assistance of knowledge which individually we do not possess. We have developed these practices and institutions by building upon habits and institutions which have proved successful in their own sphere and which have in turn become the foundation of the civilization we have built up.
  • To assume all the knowledge to be given to a single mind in the same manner in which we assume it to be given to us as the explaining economists is to assume the problem away and to disregard everything that is important and significant in the real world.
  • That an economist of Professor Schumpeter's standing should thus have fallen into a trap which the ambiguity of the term "datum" sets to the unwary can hardly be explained as a simple error. It suggests rather that there is something fundamentally wrong with an approach which habitually disregards an essential part of the phenomena with which we have to deal: the unavoidable imperfection of man's knowledge and the consequent need for a process by which knowledge is constantly communicated and acquired. Any approach, such as that of much of mathematical economics with its simultaneous equations, which in effect starts from the assumption that people's knowledge corresponds with the objective facts of the situation, systematically leaves out what is our main task to explain. I am far from denying that in our system equilibrium analysis has a useful function to perform. But when it comes to the point where it misleads some of our leading thinkers into believing that the situation which it describes has direct relevance to the solution of practical problems, it is high time that we remember that it does not deal with the social process at all and that it is no more than a useful preliminary to the study of the main problem.
  • The Use of Knowledge in Society, by Friedrich A. Hayek (1899-1992)
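Hayek's point about statistical aggregates can be made concrete with a toy example (the sites and numbers are invented): two local managers each see stock and need at their own site, while the central planner sees only the lumped-together statistic.

```python
# Local "men on the spot" each know their own site's situation; the
# planner sees only the aggregate. Numbers are purely illustrative.
sites = {
    "north": {"stock": 90, "need": 30},
    "south": {"stock": 10, "need": 70},
}

aggregate = {key: sum(site[key] for site in sites.values())
             for key in ("stock", "need")}
print(aggregate)  # stock 100 vs. need 100: the statistic balances

shortfalls = {name: max(0, site["need"] - site["stock"])
              for name, site in sites.items()}
print(shortfalls)  # the south is short 60 units; the aggregate hides it
```

If moving goods between sites is slow or costly, the manager in the south knows something the perfectly balanced aggregate cannot express: precisely the "circumstances of time and place" that Hayek argues cannot be conveyed to a central authority in statistical form.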

How wise are crowds?

  • In the past, economists trying to model the propagation of information through a population would allow any given member of the population to observe the decisions of all the other members, or of a random sampling of them. That made the models easier to deal with mathematically, but it also made them less representative of the real world.
    • Weiye Loh: Random sampling is not representative
  • “What this paper does is add the important component that this process is typically happening in a social network where you can’t observe what everyone has done, nor can you randomly sample the population to find out what a random sample has done, but rather you see what your particular friends in the network have done,” says Jon Kleinberg, Tisch University Professor in the Cornell University Department of Computer Science, who was not involved in the research. “That introduces a much more complex structure to the problem, but arguably one that’s representative of what typically happens in real settings.”
    • Weiye Loh: So random sampling is actually more accurate?
  • Earlier models, Kleinberg explains, indicated the danger of what economists call information cascades. “If you have a few crucial ingredients — namely, that people are making decisions in order, that they can observe the past actions of other people but they can’t know what those people actually knew — then you have the potential for information cascades to occur, in which large groups of people abandon whatever private information they have and actually, for perfectly rational reasons, follow the crowd,”
  • The MIT researchers’ paper, however, suggests that the danger of information cascades may not be as dire as it previously seemed.
  • a mathematical model that describes attempts by members of a social network to make binary decisions — such as which of two brands of cell phone to buy — on the basis of decisions made by their neighbors. The model assumes that for all members of the population, there is a single right decision: one of the cell phones is intrinsically better than the other. But some members of the network have bad information about which is which.
  • The MIT researchers analyzed the propagation of information under two different conditions. In one case, there’s a cap on how much any one person can know about the state of the world: even if one cell phone is intrinsically better than the other, no one can determine that with 100 percent certainty. In the other case, there’s no such cap. There’s debate among economists and information theorists about which of these two conditions better reflects reality, and Kleinberg suggests that the answer may vary depending on the type of information propagating through the network. But previous models had suggested that, if there is a cap, information cascades are almost inevitable.
  • if there’s no cap on certainty, an expanding social network will eventually converge on an accurate representation of the state of the world; that wasn’t a big surprise. But they also showed that in many common types of networks, even if there is a cap on certainty, convergence will still occur.
  • “People in the past have looked at it using more myopic models,” says Acemoglu. “They would be averaging type of models: so my opinion is an average of the opinions of my neighbors’.” In such a model, Acemoglu says, the views of people who are “oversampled” — who are connected with a large enough number of other people — will end up distorting the conclusions of the group as a whole.
  • “What we’re doing is looking at it in a much more game-theoretic manner, where individuals are realizing where the information comes from. So there will be some correction factor,” Acemoglu says. “If I’m seeing you, your action, and I’m seeing Munzer’s action, and I also know that there is some probability that you might have observed Munzer, then I discount his opinion appropriately, because I know that I don’t want to overweight it. And that’s the reason why, even though you have these influential agents — it might be that Munzer is everywhere, and everybody observes him — that still doesn’t create a herd on his opinion.”
  • the new paper leaves a few salient questions unanswered, such as how quickly the network will converge on the correct answer, and what happens when the model of agents’ knowledge becomes more complex.
  • the MIT researchers begin to address both questions. One paper examines the rate of convergence, although Dahleh and Acemoglu note that its results are “somewhat weaker” than those about the conditions for convergence. Another paper examines cases in which different agents make different decisions given the same information: some people might prefer one type of cell phone, others another. In such cases, “if you know the percentage of people that are of one type, it’s enough — at least in certain networks — to guarantee learning,” Dahleh says. “I don’t need to know, for every individual, whether they’re for it or against it; I just need to know that one-third of the people are for it, and two-thirds are against it.” For instance, he says, if you notice that a Chinese restaurant in your neighborhood is always half-empty, and a nearby Indian restaurant is always crowded, then information about what percentages of people prefer Chinese or Indian food will tell you which restaurant, if either, is of above-average or below-average quality.
  •  
    By melding economics and engineering, researchers show that as social networks get larger, they usually get better at sorting fact from fiction.
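The cascade dynamic these models formalize can be sketched with a toy sequential-learning simulation. This is a simplification in the spirit of the classic information-cascade setup the article describes (agents act in order, see predecessors' choices but not their private signals), not the MIT network model itself; the agent count, signal accuracy, and tie-breaking rule are all illustrative assumptions.

```python
import random

def simulate_sequential_learning(n_agents=200, signal_accuracy=0.7, seed=0):
    """Toy information-cascade model. The true state is 'A'. Each agent
    receives a private binary signal that is correct with probability
    `signal_accuracy` (a bounded signal), observes all predecessors'
    public choices, and picks whichever option is favoured by the simple
    count of observed choices plus its own signal."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = 'A' if rng.random() < signal_accuracy else 'B'
        votes_a = choices.count('A') + (1 if signal == 'A' else 0)
        votes_b = choices.count('B') + (1 if signal == 'B' else 0)
        if votes_a > votes_b:
            choices.append('A')
        elif votes_b > votes_a:
            choices.append('B')
        else:
            choices.append(signal)  # tie: fall back on own signal
    return choices
```

Once the public count leads by two, no single private signal can outweigh it, so every later agent herds regardless of what it privately knows; across many runs a non-trivial fraction of these cascades lock onto the wrong choice. That is exactly the danger the MIT result qualifies for networks where observation is local rather than global.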
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine - 0 views

  • most of the worries about income inequality are bogus, but some are probably better grounded and more serious than even many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
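The two mechanisms Cowen singles out, “going short on volatility” and “moving first”, can each be sketched with a toy calculation. Every number below (crash probability, premium, capture rate) is an illustrative assumption, not a calibration to any real market or strategy.

```python
import random

def short_vol_pnl(years=30, p_crash=0.05, premium=0.08, crash_loss=0.60, seed=1):
    """Toy 'short volatility' payoff stream: collect a steady premium in
    normal years and take one large loss in the rare crash year. Judged on
    any short window, the strategy usually looks brilliant."""
    rng = random.Random(seed)
    return [-crash_loss if rng.random() < p_crash else premium
            for _ in range(years)]

def gains_by_reaction_time(jump=10.0, captured_per_step=0.5, n_traders=4):
    """Toy 'moving first' payoff: after news hits, each successive trader
    captures a fixed share of whatever mispricing remains, so the earliest
    mover takes the lion's share of the total move."""
    gains, remaining = [], jump
    for _ in range(n_traders):
        captured = remaining * captured_per_step
        gains.append(captured)
        remaining -= captured
    return gains
```

In the first sketch, most years show steady gains while the occasional crash year wipes out several years of premiums at once, which is why annual bonuses reward the strategy long before the blow-up arrives. In the second, the first mover's gain (5.0 of a 10.0 move under these assumptions) exceeds the combined gains of everyone who follows, producing the winner-take-all concentration the essay describes.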
Weiye Loh

Scientist Beloved by Climate Deniers Pulls Rug Out from Their Argument - Environment - ... - 0 views

  • One of the scientists was Richard Muller from University of California, Berkeley. Muller has been working on an independent project to better estimate the planet's surface temperatures over time. Because he is willing to say publicly that he has some doubts about the accuracy of the temperature stations that most climate models are based on, he has been embraced by the science denying crowd.
  • A Koch brothers charity, for example, has donated nearly 25 percent of the financial support provided to Muller's project.
  • Skeptics of climate science have been licking their lips waiting for his latest research, which they hoped would undermine the data behind basic theories of anthropogenic climate change. At the hearing today, however, Muller threw them for a loop with this graph:
  • Muller's data (black line) tracks pretty well with the three established data sets. This is just an initial sampling of Muller's data—just 2 percent of the 1.6 billion records he's working with—but these early findings are incredibly consistent with the previous findings.
  • In his testimony, Muller made these points (emphasis mine): The Berkeley Earth Surface Temperature project was created to make the best possible estimate of global temperature change using as complete a record of measurements as possible and by applying novel methods for the estimation and elimination of systematic biases. We see a global warming trend that is very similar to that previously reported by the other groups. The world temperature data has sufficient integrity to be used to determine global temperature trends. Despite potential biases in the data, methods of analysis can be used to reduce bias effects well enough to enable us to measure long-term Earth temperature changes. Data integrity is adequate. Based on our initial work at Berkeley Earth, I believe that some of the most worrisome biases are less of a problem than I had previously thought.
  • For the many climate deniers who hang their arguments on Muller's "doubts," this is a severe blow. Of course, when the hard scientific truths are inconvenient, climate-denying House leaders can always call a lawyer, a marketing professor, and an economist into the scientific hearing.
  •  
    Today, there was a climate science hearing in the House Committee on Science, Space, and Technology. Of the six "expert" witnesses, only three were scientists. The others were an economist, a lawyer, and a professor of marketing. One of the scientists was Richard Muller from the University of California, Berkeley. Muller has been working on an independent project to better estimate the planet's surface temperatures over time. Because he is willing to say publicly that he has some doubts about the accuracy of the temperature stations that most climate models are based on, he has been embraced by the science-denying crowd. A Koch brothers charity, for example, has donated nearly 25 percent of the financial support provided to Muller's project.
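The claim that 2 percent of 1.6 billion records can already be "incredibly consistent" with the established data sets is a statistical one, and a toy simulation shows why. The sketch below is our own construction with invented numbers, not Muller's actual method: the error in a least-squares trend estimate shrinks as the sample grows, so even a small fraction of a very large record pins down the long-term trend.

```python
import random

random.seed(0)                 # fixed seed so the sketch is reproducible
N = 50_000                     # stand-in for a very large record
TRUE_TREND = 0.01              # invented warming per time step
# Each record: (time, trend * time + large random noise)
data = [(t, TRUE_TREND * t + random.gauss(0.0, 5.0)) for t in range(N)]

def fitted_slope(points):
    """Ordinary least-squares slope through (t, y) points."""
    n = len(points)
    mean_t = sum(t for t, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in points)
    den = sum((t - mean_t) ** 2 for t, _ in points)
    return num / den

full_estimate = fitted_slope(data)
sample_estimate = fitted_slope(random.sample(data, N // 50))  # a 2% subsample
```

Both the full-record fit and the 2 percent subsample closely recover the planted trend despite the heavy noise, which is why agreement between an early subsample and the established records is already informative.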
Weiye Loh

More Than 1 Billion People Are Hungry in the World - By Abhijit Banerjee and Esther Duf... - 0 views

  • We were starting to feel very bad for him and his family, when we noticed the TV and other high-tech gadgets. Why had he bought all these things if he felt the family did not have enough to eat? He laughed, and said, "Oh, but television is more important than food!"
  • For many in the West, poverty is almost synonymous with hunger. Indeed, the announcement by the United Nations Food and Agriculture Organization in 2009 that more than 1 billion people are suffering from hunger grabbed headlines in a way that any number of World Bank estimates of how many poor people live on less than a dollar a day never did. But is it really true? Are there really more than a billion people going to bed hungry each night?
  • Unfortunately, this is not always the world as the experts view it. All too many of them still promote sweeping, ideological solutions to problems that defy one-size-fits-all answers, arguing over foreign aid, for example, while the facts on the ground bear little resemblance to the fierce policy battles they wage.
  • ...9 more annotations...
  • Jeffrey Sachs, an advisor to the United Nations and director of Columbia University's Earth Institute, is one such expert. In books and countless speeches and television appearances, he has argued that poor countries are poor because they are hot, infertile, malaria-infested, and often landlocked; these factors, however, make it hard for them to be productive without an initial large investment to help them deal with such endemic problems. But they cannot pay for the investments precisely because they are poor -- they are in what economists call a "poverty trap." Until something is done about these problems, neither free markets nor democracy will do very much for them.
  • But then there are others, equally vocal, who believe that all of Sachs's answers are wrong. William Easterly, who battles Sachs from New York University at the other end of Manhattan, has become one of the most influential aid critics in his books, The Elusive Quest for Growth and The White Man's Burden. Dambisa Moyo, an economist who worked at Goldman Sachs and the World Bank, has joined her voice to Easterly's with her recent book, Dead Aid. Both argue that aid does more bad than good. It prevents people from searching for their own solutions, while corrupting and undermining local institutions and creating a self-perpetuating lobby of aid agencies.
  • The best bet for poor countries, they argue, is to rely on one simple idea: When markets are free and the incentives are right, people can find ways to solve their problems. They do not need handouts from foreigners or their own governments.
  • According to Easterly, there is no such thing as a poverty trap.
  • To find out whether there are in fact poverty traps, and, if so, where they are and how to help the poor get out of them, we need to better understand the concrete problems they face. Some aid programs help more than others, but which ones? Finding out required us to step out of the office and look more carefully at the world. In 2003, we founded what became the Abdul Latif Jameel Poverty Action Lab, or J-PAL. A key part of our mission is to do research using randomized controlled trials -- similar to experiments used in medicine to test the effectiveness of a drug -- to understand what works and what doesn't in the real-world fight against poverty. In practical terms, that meant we'd have to start understanding how the poor really live their lives.
  • Take, for example, Pak Solhin, who lives in a small village in West Java, Indonesia. He once explained to us exactly how a poverty trap worked. His parents used to have a bit of land, but they also had 13 children and had to build so many houses for each of them and their families that there was no land left for cultivation. Pak Solhin had been working as a casual agricultural worker, which paid up to 10,000 rupiah per day (about $2) for work in the fields. A recent hike in fertilizer and fuel prices, however, had forced farmers to economize. The local farmers decided not to cut wages, Pak Solhin told us, but to stop hiring workers instead. As a result, in the two months before we met him in 2008, he had not found a single day of agricultural labor. He was too weak for the most physical work, too inexperienced for more skilled labor, and, at 40, too old to be an apprentice. No one would hire him.
  • Pak Solhin, his wife, and their three children took drastic steps to survive. His wife left for Jakarta, some 80 miles away, where she found a job as a maid. But she did not earn enough to feed the children. The oldest son, a good student, dropped out of school at 12 and started as an apprentice on a construction site. The two younger children were sent to live with their grandparents. Pak Solhin himself survived on the roughly 9 pounds of subsidized rice he got every week from the government and on fish he caught at a nearby lake. His brother fed him once in a while. In the week before we last spoke with him, he had eaten two meals a day for four days, and just one for the other three.
  • Pak Solhin appeared to be out of options, and he clearly attributed his problem to a lack of food. As he saw it, farmers weren't interested in hiring him because they feared they couldn't pay him enough to avoid starvation; and if he was starving, he would be useless in the field. What he described was the classic nutrition-based poverty trap, as it is known in the academic world. The idea is simple: The human body needs a certain number of calories just to survive. So when someone is very poor, all the food he or she can afford is barely enough to allow for going through the motions of living and earning the meager income used to buy that food. But as people get richer, they can buy more food and that extra food goes into building strength, allowing people to produce much more than they need to eat merely to stay alive. This creates a link between income today and income tomorrow: The very poor earn less than they need to be able to do significant work, but those who have enough to eat can work even more. There's the poverty trap: The poor get poorer, and the rich get richer and eat even better, and get stronger and even richer, and the gap keeps increasing.
  • But though Pak Solhin's explanation of how someone might get trapped in starvation was perfectly logical, there was something vaguely troubling about his narrative. We met him not in war-infested Sudan or in a flooded area of Bangladesh, but in a village in prosperous Java, where, even after the increase in food prices in 2007 and 2008, there was clearly plenty of food available and a basic meal did not cost much. He was still eating enough to survive; why wouldn't someone be willing to offer him the extra bit of nutrition that would make him productive in return for a full day's work? More generally, although a hunger-based poverty trap is certainly a logical possibility, is it really relevant for most poor people today? What's the best way, if any, for the world to help?
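The nutrition-based poverty trap described above can be sketched as a simple dynamical system: income tomorrow depends on work capacity, which depends on the food bought with income today. The toy model below is our own, with invented parameters, and is not anything from Banerjee and Duflo's research; it only illustrates how two households on either side of a subsistence threshold diverge.

```python
SUBSISTENCE = 1.0  # invented threshold: income needed just to stay alive

def income_tomorrow(income_today):
    """Below subsistence, nearly all food goes to merely surviving, so
    earning capacity decays; above it, surplus calories build strength
    and raise output. Both rates here are invented for illustration."""
    surplus = income_today - SUBSISTENCE
    if surplus <= 0:
        return 0.9 * income_today            # too weak to work fully
    return income_today + 0.5 * surplus      # extra food -> extra output

def trajectory(start_income, steps=30):
    """Iterate the income dynamics for a number of periods."""
    incomes = [start_income]
    for _ in range(steps):
        incomes.append(income_tomorrow(incomes[-1]))
    return incomes

poor = trajectory(0.8)   # starts just below the threshold
rich = trajectory(1.2)   # starts just above it
```

Two households starting only 0.4 apart end up worlds apart: the one below the threshold slides toward destitution while the one above it compounds upward, the "poor get poorer, rich get richer" dynamic the excerpt describes.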
Weiye Loh

Income inequality: Rich and poor, growing apart | The Economist - 0 views

  • THINK income inequality growth is primarily an American phenomenon? Think again: American society is more unequal than those in most other OECD countries, and growth in inequality there has been relatively large. But with very few exceptions, the rich have done better over the past 30 years, even in highly egalitarian places like Scandinavia.
  • Over the past decades, OECD countries have undergone significant structural changes resulting from their closer integration into a global economy and rapid technological progress. These changes have brought higher rewards for high-skilled workers and thus affected the way earnings from work are distributed. The skills gap in earnings reflects several factors. First, a rapid rise in trade and financial markets integration has generated a relative shift in labour demand in favour of high-skilled workers at the expense of low-skilled labour. Second, technical progress has shifted production technologies in both industries and services in favour of skilled labour...Finally, during the past two decades most OECD countries carried out regulatory reforms to strengthen competition in the markets for goods and services and associated reforms that aimed at making labour markets more adaptable. For instance, anti-competitive product-market regulations were reduced significantly in all countries. Employment protection legislation for workers with temporary contracts also became more lenient in many countries. Minimum wages, relative to average wages, have also declined in a number of countries since the 1980s. Wage-setting mechanisms have also changed; the share of union members among workers has fallen across most countries, although the coverage of collective bargaining has generally remained rather stable over time. In a number of countries, unemployment benefit replacement rates fell, and in an attempt to promote employment among low-skilled workers, taxes on labour for low-income workers were also reduced.
  • It's tempting to look at this list of regulatory changes and argue that it was these rule changes which facilitated growth in inequality. That may be true to some extent, but the universality of the reform experience makes me think it's at least as likely that underlying trends (like globalisation and technological change) made the prevailing rules unsustainable.
  • ...1 more annotation...
  • it's critical to address this issue if popular support for liberal economic activity is to be maintained.
  •  
    while national factors can influence the degree of inequality growth and can mitigate (or not) the negative impacts of that growth, there seem to be broader, global forces pushing inequality up across countries.
Weiye Loh

Does "Inclusion" Matter for Open Government? (The Answer Is, Very Much Indeed... - 0 views

  • But in the context of the Open Government Partnership and the 70 or so countries that have already committed themselves to this or are in the process, I’m not sure that the world can afford to wait to see whether this correlation is direct, indirect or spurious, especially if we can recognize that in the world of OGP, the currency of accumulation and concentration is not raw economic wealth but rather raw political power.
  • in the same way as there appears to be an association between the rise of the Internet and increasing concentrations of wealth one might anticipate that the rise of Internet enabled structures of government might be associated with the increasing concentration of political power in fewer and fewer hands and particularly the hands of those most adept at manipulating the artifacts and symbols of the new Internet age.
  • I am struck by the fact that while the OGP over and over talks about the importance and value and need for Open Government, there is no similar or even partial call for Inclusive Government. I’ve argued elsewhere how “Open”, in the absence of attention being paid to ensuring the pre-conditions for the broadest base of participation, will almost inevitably lead to the empowerment of the powerful. What I fear with the OGP is that, by not paying even a modicum of attention to the issue of inclusion or inclusive development and participation, all of the idealism and energy that is displayed today in Brasilia is being directed towards the creation of the Governance equivalents of the Internet billionaires, whatever that might look like.
  • ...1 more annotation...
  • crowd-sourced public policy
  •  
    Alongside the rise of the Internet and the empowerment of the Internet generation have emerged the greatest inequalities of wealth and privilege that any of the increasingly Internet-enabled economies/societies have experienced at least since the Great Depression, and perhaps since the beginnings of systematic economic record-keeping. The association between the rise of inequality and the rise of the Internet has not yet been explained, and it may simply be a coincidence, but somehow I'm doubtful; we await a newer generation of rather more critical and less dewy-eyed economists to give us the models and explanations for this co-evolution.
Weiye Loh

The future of customer support: Outsourcing is so last year | The Economist - 0 views

  • Gartner, the research company, estimates that using communities to solve support issues can reduce costs by up to 50%. When TomTom, a maker of satellite-navigation systems, switched on social support, members handled 20,000 cases in its first two weeks and saved it around $150,000. Best Buy, an American gadget retailer, values its 600,000 users at $5m annually. 
  •  
    "Unsourcing", as the new trend has been dubbed, involves companies setting up online communities to enable peer-to-peer support among users. Instead of speaking with a faceless person thousands of miles away, customers' problems are answered by individuals in the same country who have bought and used the same products. This happens either on the company's own website or on social networks like Facebook and Twitter, and the helpers are generally not paid anything for their efforts.
Weiye Loh

Letter from China: China and the Unofficial Truth : The New Yorker - 0 views

  • Chinese citizens are busy dissecting and taunting the meeting on social media. While Premier Wen Jiabao was pledging that the government would “quickly” reverse the widening gap between rich and poor—last year he said it would do so “gradually”—Chinese Web users were scrutinizing photos of delegates arriving for the meeting, and posting photos of their nine-hundred dollar Hermès belts and Birkin and Celine and Louis Vuitton purses that retail for car prices. As Danwei points out, an image that has been making the rounds with particular relish shows the C.E.O. of China Power International Development Ltd, Li Xiaolin, in a salmon-colored suit from Emilio Pucci’s spring-summer 2012 collection—price: nearly two thousand dollars. Web user Cairangduoji paired her photo with the image of dirt-covered barefoot kids in the countryside and the comment: “That amount could help two hundred children wear warm clothes, and avoid the chilly attacks of winter.” And it appended a quote from Li, of the salmon suit, who purportedly once said, “I think we should open a morality file on all citizens to control everyone and give them a ‘sense of shame.’” (This is no ordinary delegate: Li Xiaolin happens to be the daughter of former Premier Li Peng, who oversaw the crackdown at Tiananmen Square.)
  • Another message making the rounds uses an official high-res photo of the gathering to zoom in on delegates who were captured fast asleep or typing on their smart phones.
Weiye Loh

New voting methods and fair elections : The New Yorker - 0 views

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda’s main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority’s favorite—it could, in theory, elect someone who was nobody’s favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
  • ...15 more annotations...
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware.
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferrable-vote elections can behave in topsy-turvy ways: they are what mathematicians call “non-monotonic,” which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn’t. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman’s calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It’s rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren’t usually published. But, in Burlington’s 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you’d think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result “a rather discouraging one, as regards the dream of a perfect democracy.” Szpiro goes so far as to write that “the democratic world would never be the same again.”
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
  • Range and approval voting (in which voters simply mark every candidate they find acceptable, and the candidate approved by the most voters wins) deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
  •  
    WIN OR LOSE No voting system is flawless. But some are less democratic than others. by Anthony Gottlieb
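The methods discussed above (Borda count, instant-runoff, and range voting) can be made concrete in a few lines of code. This is a minimal sketch with an invented 20-voter electorate, not data from the article, chosen so that plurality, Borda, and instant runoff each pick a different winner; ties are broken arbitrarily.

```python
from collections import Counter

def borda(ballots):
    """Borda count: with n candidates, a ballot gives n-1 points to its
    first choice, n-2 to its second, ... 0 to its last. Highest total wins."""
    n = len(ballots[0])
    points = Counter()
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            points[candidate] += n - 1 - position
    return points.most_common(1)[0][0]

def instant_runoff(ballots):
    """Instant runoff: repeatedly eliminate the candidate with the fewest
    first-preference votes among those remaining, transferring each ballot
    to its next surviving preference, until someone has a majority."""
    remaining = set(ballots[0])
    while True:
        firsts = Counter({c: 0 for c in remaining})
        firsts.update(next(c for c in b if c in remaining) for b in ballots)
        top, votes = firsts.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top
        remaining.discard(min(firsts, key=firsts.get))

def range_vote(score_ballots):
    """Range voting: each ballot rates some candidates; the highest total
    score wins (unrated candidates get no points from that ballot)."""
    totals = Counter()
    for ballot in score_ballots:
        for candidate, rating in ballot.items():
            totals[candidate] += rating
    return totals.most_common(1)[0][0]

# An invented 20-voter electorate in which the methods disagree:
ballots = [["A", "B", "C"]] * 8 + [["C", "B", "A"]] * 7 + [["B", "C", "A"]] * 5
plurality_winner = Counter(b[0] for b in ballots).most_common(1)[0][0]
```

On these ballots plurality elects A (the largest minority), Borda elects B (the broadly acceptable second choice), and instant runoff elects C after B's elimination transfers votes: three methods, three different winners, which is exactly the article's point that the choice of voting system can decide the outcome.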
Weiye Loh

Russia and Belarus: It takes one to know one | The Economist - 0 views

  • RUSSIA and Belarus are unlikely champions of democracy and freedom of speech. But a postmodernist approach to politics can yield odd results in the post-Soviet world. In recent weeks these authoritarian regimes have denounced each other’s authoritarianism and deployed state-controlled media to attack each other’s lack of media freedom. Bizarrely, this war of words has been waged in the name of brotherly ties and economic union.
  •  
    Russia and Belarus It takes one to know one A media war of words breaks out between two supposed allies Jul 22nd 2010 | MOSCOW
Weiye Loh

Apples and PCs: Who innovates more, Apple or HP? | The Economist - 1 views

  • In terms of processing power, speed, memory, and so on, how do Macs and PCs actually compare? And does Apple innovate in terms of basic hardware quality as often as, or less often than, the likes of HP, Compaq, and other producers? This question is of broader interest from an economist's point of view because it also has to do with the age-old question of whether competition or monopoly is a better spur to innovation. In a certain sense, Apple is a monopolist, and PC makers are in a more competitive market. (I say in a certain sense because obviously Macs and PCs are substitutes; it's just that they're more imperfect substitutes than two PCs are for each other, in part because of software migration issues.)
  • Schumpeter argued long ago that, because a monopolist reaps the full reward from innovation, such firms would be more innovative. The case for patents relies in part on a version of this argument: companies are given monopoly rights over a new product for a period of time in order for them to be able to recoup the costs of innovation; without such protection, it is argued, they would not find it beneficial to innovate in the first place.
  • others have argued that competition spurs innovation by giving firms a way to differentiate themselves from their competitors (in a way, creating something new gives a company a temporary, albeit brief, "monopoly")
  •  
    Who innovates more, Apple or HP?
Weiye Loh

Understanding the universe: Order of creation | The Economist - 0 views

  • In “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
  • The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
  • It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
  • ...2 more annotations...
  • The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
  • Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.
  •  
    Order of creation Even Stephen Hawking doesn't quite manage to explain why we are here
Weiye Loh

The fog of war: The fog of war | The Economist - 0 views

  •  
    The fog of war Apr 5th 2010, 22:54 by R.M. | NEW YORK