
Home/ New Media Ethics 2009 course/ Group items tagged Reasoning


Weiye Loh

The Way We Live Now - Metric Mania - NYTimes.com - 0 views

  • In the realm of public policy, we live in an age of numbers.
  • Do we hold an outsize belief in our ability to gauge complex phenomena, measure outcomes and come up with compelling numerical evidence? A well-known quotation usually attributed to Einstein is “Not everything that can be counted counts, and not everything that counts can be counted.” I’d amend it to a less eloquent, more prosaic statement: Unless we know how things are counted, we don’t know if it’s wise to count on the numbers.
  • The problem isn’t with statistical tests themselves but with what we do before and after we run them.
  • First, we count if we can, but counting depends a great deal on previous assumptions about categorization. Consider, for example, the number of homeless people in Philadelphia, or the number of battered women in Atlanta, or the number of suicides in Denver. Is someone homeless if he’s unemployed and living with his brother’s family temporarily? Do we require that a woman self-identify as battered to count her as such? If a person starts drinking day in and day out after a cancer diagnosis and dies from acute cirrhosis, did he kill himself? The answers to such questions significantly affect the count.
  • Second, after we’ve gathered some numbers relating to a phenomenon, we must reasonably aggregate them into some sort of recommendation or ranking. This is not easy. By appropriate choices of criteria, measurement protocols and weights, almost any desired outcome can be reached.
  • Are there good reasons the authors picked the criteria they did? Why did they weigh the criteria in the way they did?
  • Since the answer to the last question is usually yes, the problem of reasonable aggregation is no idle matter.
  • These two basic procedures — counting and aggregating — have important implications for public policy. Consider the plan to evaluate the progress of New York City public schools inaugurated by the city a few years ago. While several criteria were used, much of a school’s grade was determined by whether students’ performance on standardized state tests showed annual improvement. This approach risked putting too much weight on essentially random fluctuations and induced schools to focus primarily on the topics on the tests. It also meant that the better schools could receive mediocre grades because they were already performing well and had little room for improvement. Conversely, poor schools could receive high grades by improving just a bit.
  • Medical researchers face similar problems when it comes to measuring effectiveness.
  • Suppose that whenever people contract the disease, they always get it in their mid-60s and live to the age of 75. In the first region, an early screening program detects such people in their 60s. Because these people live to age 75, the five-year survival rate is 100 percent. People in the second region are not screened and thus do not receive their diagnoses until symptoms develop in their early 70s, but they, too, die at 75, so their five-year survival rate is 0 percent. The laissez-faire approach thus yields the same results as the universal screening program, yet if five-year survival were the criterion for effectiveness, universal screening would be deemed the best practice.
  • Because so many criteria can be used to assess effectiveness — median or mean survival times, side effects, quality of life and the like — there is a case to be made against mandating that doctors follow what seems at any given time to be the best practice. Perhaps, as some have suggested, we should merely nudge them with gentle incentives. A comparable tentativeness may be appropriate when devising criteria for effective schools.
  • Arrow’s Theorem, a famous result in mathematical economics, essentially states that no voting system satisfying certain minimal conditions can be guaranteed to always yield a fair or reasonable aggregation of the voters’ rankings of several candidates. A squishier analogue for the field of social measurement would say something like this: No method of measuring a societal phenomenon satisfying certain minimal conditions exists that can’t be second-guessed, deconstructed, cheated, rejected or replaced. This doesn’t mean we shouldn’t be counting — but it does mean we should do so with as much care and wisdom as we can muster.
  •  
    THE WAY WE LIVE NOW Metric Mania
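The article’s aggregation point is easy to demonstrate: with the same raw numbers, two equally defensible weightings of the criteria can reverse a ranking. A minimal sketch in Python, where the school names, criteria, scores and weights are all invented for illustration:

```python
# Two hypothetical schools scored on three criteria (scale 0-10).
scores = {
    "School A": {"test_gains": 9, "graduation": 6, "attendance": 5},
    "School B": {"test_gains": 5, "graduation": 9, "attendance": 8},
}

def rank(weights):
    """Return school names ordered by weighted total, best first."""
    totals = {
        name: sum(weights[c] * v for c, v in crits.items())
        for name, crits in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Weighting 1: emphasize year-over-year test gains -> School A wins.
print(rank({"test_gains": 0.6, "graduation": 0.2, "attendance": 0.2}))
# Weighting 2: emphasize graduation and attendance -> School B wins.
print(rank({"test_gains": 0.2, "graduation": 0.4, "attendance": 0.4}))
```

Nothing in the data changed between the two calls; only the weights did, which is exactly why “why did they weigh the criteria in the way they did?” is the question to ask of any ranking.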
Weiye Loh

Straits Times Forum explains why it heavily edited letter | The Online Citizen - 0 views

  • 1. You stated we wrongly replaced the statistic you cited with another from Ms Rachel Chang’s article on March 8 (“School system still the ‘best way to move up’”). Your original letter: “It is indeed heartwarming to learn that 90% of children from one-to-three-room flats do not make it to university.” Reasons we edited it: Factual error, sense. There were two problems with your sentence. First, it was contradictory and didn’t make sense. Your original sentence cannot mean what it says unless you were elated over the fact that nine in 10 children from less well-off homes failed to qualify for university. So we edited it for sense, i.e., underscoring a positive feeling (heartwarming) with a positive fact, rather than the self-penned irony of a positive feeling (heartwarming) backed by a negative fact (a 90% failure rate of university admission by less well-off children). That was why we replaced the original statistic with the only one in Ms Chang’s March 8 report that matched your elation, that is, that 50 per cent of less well-off children found tertiary success.
  • (Visa: Firstly, I find it hard to believe that nobody in the Straits Times office understands the meaning of sarcasm. Secondly, there was NO FACTUAL ERROR. Allow me to present to you the statistics, direct from The Straits Times themselves: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf )
  • Second, we replaced your original statistic because it did not exist in Ms Chang’s March 8 front-page report. Ms Chang quoted that statistic in a later article (“Poor kids need aspiration”; March 18, paragraph 5), which appeared after your letter was published. (Visa: It did not exist? Pay careful attention to the URL: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf . Look at the number. 20110308. 2011 03 08. 8th March 2011.)
  • 2. Your original letter: “His (Education Minister Dr Ng) statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination.” Reason we edited it: Factual error.
  • “His statement is backed by the statistic that about 50 per cent of children from the bottom third of the socio-economic bracket score within the top two-thirds of their Primary School Leaving Examination cohort.” (Para 3 of Ms Chang’s March 8 report.) (Visa: THIS IS NOT A FACTUAL ERROR. If 50% of a group score in the top two-thirds, then the remaining 50% of the group, by simple process of elimination, must score in the bottom third!)
  • You can assume that the stats are wrong, but you CANNOT CHANGE them and CONTINUE to use the contributor’s name! Where are your journalistic morals, ethics, and basic human decency? Since it is YOUR meaning, and not the writer’s, doesn’t it mean that you ABUSE, FABRICATE, and LIE to the public that it was written by Samuel?
  • Either print a news column or delete the letter. At least have the basic courtesy to call and ASK the writer about changes. Even a kid knows that it’s basic human decency to ask. HOW come you, as a grown man, YAP KOON HONG, can’t?
  • “So we edited it for sense ……. That was why we replaced the original statistic with the only one in Ms Chang’s March 8 report that matched your elation ……” and “So, we needed to provide the context to the minister’s statement in order to retain the sense of your meaning.” These are extraordinary statements. My understanding is that editors edit for clarity and brevity. It is extraordinary and perhaps only in Singapore that editors also edit for “sense”.
  • 50% make it to university, therefore the other 50% did not make it. This kind of reasoning only works in primary or secondary school maths. In the real world, in academia and in journalism, the above would be considered a logical fallacy. To explain why, one must consider the fact that not going to university is not the same as “not making it”. World-class music, sports, volunteer work, overseas universities, travel: these are just a few of the reasons why we can’t just do a simple calculation when it comes to statistics. Bill Gates didn’t go to university; would we classify him as “not making it”? Sarcasm has no place in journalism, as it relies on visual and vocal cues to be interpreted. I live in Washington, and if the above letter was sent to any newspaper it would be thrown out with all the other garbage faster than you could say freedom of speech. At least the editor in question here bothered to try his best to get the letter published.
  • “we felt your opinion deserved publication” Please, Yap Koon Hong, what you published was the very opposite of his opinion! As you yourself admitted, Samuel’s letter was ironic in nature, but you removed all traces of irony and changed the statistics to fabricate a sense of “elation” that Samuel did not mean to convey!
Weiye Loh

When the scientific evidence is unwelcome, people try to reason it away | Ben Goldacre ... - 0 views

  • Each group found extensive methodological holes in the evidence they disagreed with, but ignored the very same holes in the evidence that reinforced their views.
  • Some people go even further than this when presented with unwelcome data, and decide that science itself is broken.
  •  
    When the scientific evidence is unwelcome, people try to reason it away Research results not consistent with your world view? Then you're likely to believe science can't supply all the answers
Weiye Loh

Stock and flow « Snarkmarket - 0 views

  • There are two kinds of quantities in the world. Stock is a static value: money in the bank, or trees in the forest. Flow is a rate of change: fifteen dollars an hour, or three-thousand toothpicks a day.
  • stock and flow is the master metaphor for media today. Here’s what I mean: Flow is the feed. It’s the posts and the tweets. It’s the stream of daily and sub-daily updates that remind people that you exist. Stock is the durable stuff. It’s the content you produce that’s as interesting in two months (or two years) as it is today. It’s what people discover via search. It’s what spreads slowly but surely, building fans over time.
  • I feel like flow is ascendant these days, for obvious reasons—but we neglect stock at our own peril.
  • Flow is a treadmill, and you can’t spend all of your time running on the treadmill. Well, you can. But then one day you’ll get off and look around and go: Oh man. I’ve got nothing here.
  • But I’m not saying you should ignore flow!
  • this is no time to hole up and work in isolation, emerging after long months or years with your perfectly-polished opus. Everybody will go: huh?
  • if you don’t have flow to plug your new fans into, you’re suffering a huge (here it is!) opportunity cost. You’ll have to find them all again next time you emerge from your cave.
  • we all got really good at flow, really fast. But flow is ephemeral. Stock sticks around. Stock is capital. Stock is protein.
  • And the real magic trick in 2010 is to put them both together. To keep the ball bouncing with your flow—to maintain that open channel of communication—while you work on some kick-ass stock in the background.
  • all these super-successful artists and media people today who don’t really think about flow. Like, Wes Anderson
  • the secret is that somebody else does his flow for him. I mean, what are PR and advertising? Flow, bought and paid for.
  • Today I’m still always asking myself: Is this stock? Is this flow? How’s my mix? Do I have enough of both?
Weiye Loh

New voting methods and fair elections : The New Yorker - 0 views

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda’s main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority’s favorite—it could, in theory, elect someone who was nobody’s favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferrable-vote elections can behave in topsy-turvy ways: they are what mathematicians call “non-monotonic,” which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn’t. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman’s calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It’s rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren’t usually published. But, in Burlington’s 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you’d think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result “a rather discouraging one, as regards the dream of a perfect democracy.” Szpiro goes so far as to write that “the democratic world would never be the same again.”
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
  • Range and approval voting deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
  •  
    WIN OR LOSE No voting system is flawless. But some are less democratic than others. by Anthony Gottlieb
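The instant-runoff procedure the article walks through (count first preferences, stop on a majority, otherwise eliminate the weakest candidate and transfer that candidate’s ballots) can be sketched in a few lines of Python. This is an illustrative toy, not any official tallying code; the ballots are invented, and ties are broken arbitrarily:

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of preference lists, most-preferred first.
    Returns the winner under instant-runoff rules."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, top = tallies.most_common(1)[0]  # arbitrary on ties
        if top * 2 > total or len(candidates) == 2:
            return leader
        # No majority: eliminate the candidate with the fewest
        # first preferences and redistribute on the next pass.
        candidates.discard(min(tallies, key=tallies.get))

# Five ballots, no majority on first preferences (A=2, B=1, C=2).
ballots = [["A", "B", "C"], ["A", "B", "C"], ["B", "C", "A"],
           ["C", "B", "A"], ["C", "B", "A"]]
print(instant_runoff(ballots))  # B is eliminated; its ballot
                                # transfers to C, who wins 3-2.
```

Note that the winner’s final pile really is “a jumble” in the article’s sense: C’s three votes are two first preferences plus one transferred second preference.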
Weiye Loh

CancerGuide: The Median Isn't the Message - 0 views

  • Statistics recognizes different measures of an "average," or central tendency. The mean is our usual concept of an overall average - add up the items and divide them by the number of sharers
  • The median, a different measure of central tendency, is the half-way point.
  • A politician in power might say with pride, "The mean income of our citizens is $15,000 per year." The leader of the opposition might retort, "But half our citizens make less than $10,000 per year." Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean; but he can balance only one mendicant in calculating a median).
  • ...7 more annotations...
  • The larger issue that creates a common distrust or contempt for statistics is more troubling. Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more "real" and the only proper basis for action - if it feels good, do it - while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, "Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death."
  • This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science. Heart and head are focal points of one body, one personality.
  • We still carry the historical baggage of a Platonic heritage that seeks sharp essences and definite boundaries. (Thus we hope to find an unambiguous “beginning of life” or “definition of death,” although nature often comes to us as irreducible continua.) This Platonic heritage, with its emphasis on clear distinctions and separated immutable entities, leads us to view statistical measures of central tendency wrongly, indeed opposite to the appropriate interpretation in our actual world of variation, shadings, and continua. In short, we view means and medians as the hard “realities,” and the variation that permits their calculation as a set of transient and imperfect measurements of this hidden essence. If the median is the reality and variation around the median just a device for its calculation, the “I will probably be dead in eight months” may pass as a reasonable interpretation.
  • But all evolutionary biologists know that variation itself is nature's only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently - and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation. When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good. I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation's best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.
  • Another technical point then added even more solace. I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call "right skewed." (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right. In skewed distributions, variation to one side of the central tendency is more stretched out - left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned. After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn't much room for the distribution's lower (or left) half - it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives. The distribution must be right skewed, and I needed to know how long the extended tail ran - for I had already concluded that my favorable profile made me a good candidate for that part of the curve.
  • The distribution was, indeed, strongly right skewed, with a long tail (however small) that extended for several years above the eight-month median. I saw no reason why I shouldn’t be in that small tail, and I breathed a very long sigh of relief. My technical knowledge had helped. I had read the graph correctly. I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances - substantial time.
  • One final point about statistical distributions. They apply only to a prescribed set of circumstances - in this case to survival with mesothelioma under conventional modes of treatment. If circumstances change, the distribution may alter. I was placed on an experimental protocol of treatment and, if fortune holds, will be in the first cohort of a new distribution with high median and a right tail extending to death by natural causes at advanced old age.
  •  
    The Median Isn't the Message by Stephen Jay Gould
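Gould’s point about means, medians, and right skew is easy to see with a toy data set. The survival times below are invented for illustration only; they are chosen so the median matches the essay’s eight months while a long right tail pulls the mean far above it:

```python
import statistics

# Hypothetical right-skewed survival times, in months (invented data).
# The lower half is squeezed between 0 and the median; the upper half
# stretches out for years.
survival = [2, 4, 6, 7, 8, 10, 24, 48, 96]

print(statistics.median(survival))           # 8
print(round(statistics.mean(survival), 1))   # 22.8
# Half the patients outlive the 8-month median, some by years:
print(sum(1 for t in survival if t > 8))     # 4
```

The same asymmetry drives the politician example in the excerpt above: a few very large values can drag the mean well away from the median, so neither figure alone describes the distribution.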
Weiye Loh

Mystery and Evidence - NYTimes.com - 0 views

  • a very natural way for atheists to react to religious claims: to ask for evidence, and reject these claims in the absence of it. Many of the several hundred comments that followed two earlier Stone posts, “Philosophy and Faith” and “On Dawkins’s Atheism: A Response,” both by Gary Gutting, took this stance. Certainly this is the way that today’s “new atheists” tend to approach religion. According to their view, religions — by this they mean basically Christianity, Judaism and Islam, and I will follow them in this — are largely in the business of making claims about the universe that are a bit like scientific hypotheses. In other words, they are claims — like the claim that God created the world — that are supported by evidence, that are proved by arguments and tested against our experience of the world. And against the evidence, these hypotheses do not seem to fare well.
  • But is this the right way to think about religion? Here I want to suggest that it is not, and to try and locate what seem to me some significant differences between science and religion
  • To begin with, scientific explanation is a very specific and technical kind of knowledge. It requires patience, pedantry, a narrowing of focus and (in the case of the most profound scientific theories) considerable mathematical knowledge and ability. No-one can understand quantum theory — by any account, the most successful physical theory there has ever been — unless they grasp the underlying mathematics. Anyone who says otherwise is fooling themselves.
  • Religious belief is a very different kind of thing. It is not restricted only to those with a certain education or knowledge, it does not require years of training, it is not specialized and it is not technical. (I’m talking here about the content of what people who regularly attend church, mosque or synagogue take themselves to be thinking; I’m not talking about how theologians interpret this content.)
  • while religious belief is widespread, scientific knowledge is not. I would guess that very few people in the world are actually interested in the details of contemporary scientific theories. Why? One obvious reason is that many lack access to this knowledge. Another reason is that even when they have access, these theories require sophisticated knowledge and abilities, which not everyone is capable of getting.
  • most people aren’t deeply interested in science, even when they have the opportunity and the basic intellectual capacity to learn about it. Of course, educated people who know about science know roughly what Einstein, Newton and Darwin said. Many educated people accept the modern scientific view of the world and understand its main outlines. But this is not the same as being interested in the details of science, or being immersed in scientific thinking.
  • This lack of interest in science contrasts sharply with the worldwide interest in religion. It’s hard to say whether religion is in decline or growing, partly because it’s hard to identify only one thing as religion — not a question I can address here. But it’s pretty obvious that whatever it is, religion commands and absorbs the passions and intellects of hundreds of millions of people, many more people than science does. Why is this? Is it because — as the new atheists might argue — they want to explain the world in a scientific kind of way, but since they have not been properly educated they haven’t quite got there yet? Or is it because so many people are incurably irrational and are incapable of scientific thinking? Or is something else going on?
  • Some philosophers have said that religion is so unlike science that it has its own “grammar” or “logic” and should not be held accountable to the same standards as scientific or ordinary empirical belief. When Christians express their belief that “Christ has risen,” for example, they should not be taken as making a factual claim, but as expressing their commitment to what Wittgenstein called a certain “form of life,” a way of seeing significance in the world, a moral and practical outlook which is worlds away from scientific explanation.
  • This view has some merits, as we shall see, but it grossly misrepresents some central phenomena of religion. It is absolutely essential to religions that they make certain factual or historical claims. When Saint Paul says “if Christ is not risen, then our preaching is in vain and our faith is in vain” he is saying that the point of his faith depends on a certain historical occurrence.
  • Theologians will debate exactly what it means to claim that Christ has risen, what exactly the meaning and significance of this occurrence is, and will give more or less sophisticated accounts of it. But all I am saying is that whatever its specific nature, Christians must hold that there was such an occurrence. Christianity does make factual, historical claims. But this is not the same as being a kind of proto-science. This will become clear if we reflect a bit on what science involves.
  • The essence of science involves making hypotheses about the causes and natures of things, in order to explain the phenomena we observe around us, and to predict their future behavior. Some sciences — medical science, for example — make hypotheses about the causes of diseases and test them by intervening. Others — cosmology, for example — make hypotheses that are more remote from everyday causes, and involve a high level of mathematical abstraction and idealization. Scientific reasoning involves an obligation to hold a hypothesis only to the extent that the evidence requires it. Scientists should not accept hypotheses which are “ad hoc” — that is, just tailored for one specific situation but cannot be generalized to others. Most scientific theories involve some kind of generalization: they don’t just make claims about one thing, but about things of a general kind. And their hypotheses are designed, on the whole, to make predictions; and if these predictions don’t come out true, then this is something for the scientists to worry about.
  • Religions do not construct hypotheses in this sense. I said above that Christianity rests upon certain historical claims, like the claim of the resurrection. But this is not enough to make scientific hypotheses central to Christianity, any more than it makes such hypotheses central to history. It is true, as I have just said, that Christianity does place certain historical events at the heart of its conception of the world, and to that extent, one cannot be a Christian unless one believes that these events happened. Speaking for myself, it is because I reject the factual basis of the central Christian doctrines that I consider myself an atheist. But I do not reject these claims because I think they are bad hypotheses in the scientific sense. Not all factual claims are scientific hypotheses. So I disagree with Richard Dawkins when he says “religions make existence claims, and this means scientific claims.”
  • Taken as hypotheses, religious claims do very badly: they are ad hoc, they are arbitrary, they rarely make predictions and when they do they almost never come true. Yet the striking fact is that it does not worry Christians when this happens. In the gospels Jesus predicts the end of the world and the coming of the kingdom of God. It does not worry believers that Jesus was wrong (even if it causes theologians to reinterpret what is meant by ‘the kingdom of God’). If Jesus was framing something like a scientific hypothesis, then it should worry them. Critics of religion might say that this just shows the manifest irrationality of religion. But what it suggests to me is that something else is going on, other than hypothesis formation.
  • Religious belief tolerates a high degree of mystery and ignorance in its understanding of the world. When the devout pray, and their prayers are not answered, they do not take this as evidence which has to be weighed alongside all the other evidence that prayer is effective. They feel no obligation whatsoever to weigh the evidence. If God does not answer their prayers, well, there must be some explanation of this, even though we may never know it. Why do people suffer if an omnipotent God loves them? Many complex answers have been offered, but in the end they come down to this: it’s a mystery.
  • Science too has its share of mysteries (or rather: things that must simply be accepted without further explanation). But one aim of science is to minimize such things, to reduce the number of primitive concepts or primitive explanations. The religious attitude is very different. It does not seek to minimize mystery. Mysteries are accepted as a consequence of what, for the religious, makes the world meaningful.
  • Religion is an attempt to make sense of the world, but it does not try and do this in the way science does. Science makes sense of the world by showing how things conform to its hypotheses. The characteristic mode of scientific explanation is showing how events fit into a general pattern.
  • Religion, on the other hand, attempts to make sense of the world by seeing a kind of meaning or significance in things. This kind of significance does not need laws or generalizations, but just the sense that the everyday world we experience is not all there is, and that behind it all is the mystery of God’s presence. The believer is already convinced that God is present in everything, even if they cannot explain this or support it with evidence. But it makes sense of their life by suffusing it with meaning. This is the attitude (seeing God in everything) expressed in George Herbert’s poem, “The Elixir.” Equipped with this attitude, even the most miserable tasks can come to have value: Who sweeps a room as for Thy laws/ Makes that and th’ action fine.
  • None of these remarks are intended as being for or against religion. Rather, they are part of an attempt (by an atheist, from the outside) to understand what it is. Those who criticize religion should have an accurate understanding of what it is they are criticizing. But to understand a world view, or a philosophy or system of thought, it is not enough just to understand the propositions it contains. You also have to understand what is central and what is peripheral to the view. Religions do make factual and historical claims, and if these claims are false, then the religions fail. But this dependence on fact does not make religious claims anything like hypotheses in the scientific sense. Hypotheses are not central. Rather, what is central is the commitment to the meaningfulness (and therefore the mystery) of the world.
  • while religious thinking is widespread in the world, scientific thinking is not. I don’t think that this can be accounted for merely in terms of the ignorance or irrationality of human beings. Rather, it is because of the kind of intellectual, emotional and practical appeal that religion has for people, which is a very different appeal from the kind of appeal that science has. Stephen Jay Gould once argued that religion and science are “non-overlapping magisteria.” If he meant by this that religion makes no factual claims which can be refuted by empirical investigations, then he was wrong. But if he meant that religion and science are very different kinds of attempt to understand the world, then he was certainly right.
  • Mystery and Evidence By TIM CRANE
Weiye Loh

Skepticblog » Investing in Basic Science - 0 views

  • A recent editorial in the New York Times by Nicholas Wade raises some interesting points about the nature of basic science research – primarily that it’s risky.
  • As I have pointed out about the medical literature, researcher John Ioannidis has explained why most published studies turn out in retrospect to be wrong. The same is true of most basic science research – and the underlying reason is the same. The world is complex, and most of our guesses about how it might work turn out to be either flat-out wrong, incomplete, or superficial. And so most of our probing and prodding of the natural world, looking for the path to the actual answer, turn out to miss the target.
  • research costs considerable resources of time, space, money, opportunity, and people-hours. There may also be some risk involved (such as to subjects in the clinical trial). Further, negative studies are actually valuable (more so than terrible pictures). They still teach us something about the world – they teach us what is not true. At the very least this narrows the field of possibilities. But the analogy holds in so far as the goal of scientific research is to improve our understanding of the world and to provide practical applications that make our lives better. Wade writes mostly about how we fund research, and this relates to our objectives. Most of the corporate research money is interested in the latter – practical (and profitable) applications. If this is your goal, then basic science research is a bad bet. Most investments will be losers, and for most companies this will not be offset by the big payoffs of the rare winners. So many companies will allow others to do the basic science (government, universities, start-up companies) then raid the winners by using their resources to buy them out, and then bring them the final steps to a marketable application. There is nothing wrong or unethical about this. It’s a good business model.
  • What, then, is the role of public (government) funding of research? Primarily, Wade argues (and I agree), to provide infrastructure for expensive research programs, such as building large colliders.
  • the more the government invests in basic science and infrastructure, the more winners will emerge that private industry can then capitalize on. This is a good way to build a competitive dynamic economy.
  • But there is a pitfall – prematurely picking winners and losers. Wade gives the example of California investing specifically into developing stem cell treatments. He argues that stem cells, while promising, do not hold a guarantee of eventual success, and perhaps there are other technologies that will work and are being neglected. The history of science and technology has clearly demonstrated that it is wickedly difficult to predict the future (and all those who try are destined to be mocked by future generations with the benefit of perfect hindsight). Prematurely committing to one technology therefore contains a high risk of wasting a great deal of limited resources, and missing other perhaps more fruitful opportunities.
  • The underlying concept is that science research is a long-term game. Many avenues of research will not pan out, and those that do will take time to inspire specific applications. The media, however, likes catchy headlines. That means when they are reporting on basic science research journalists ask themselves – why should people care? What is the application of this that the average person can relate to? This seems reasonable from a journalistic point of view, but with basic science reporting it leads to wild speculation about a distant possible future application. The public is then left with the impression that we are on the verge of curing the common cold or cancer, or developing invisibility cloaks or flying cars, or replacing organs and having household robot servants. When a few years go by and we don’t have our personal android butlers, the public then thinks that the basic science was a bust, when in fact there was never a reasonable expectation that it would lead to a specific application anytime soon. But it still may be on track for interesting applications in a decade or two.
  • this also means that the government, generally, should not be in the game of picking winners and losers – putting its thumb on the scale, as it were. Rather, it will get the most bang for the research buck if it simply invests in science infrastructure, and also funds scientists in broad areas.
  • The same is true of technology – don’t pick winners and losers. The much-hyped “hydrogen economy” comes to mind. Let industry and the free market sort out what will work. If you have to invest in infrastructure before a technology is mature, then at least hedge your bets and keep funding flexible. Fund “alternative fuel” as a general category, and reassess on a regular basis how funds should be allocated. But don’t get too specific.
  • Funding research but leaving the details to scientists may be optimal
  • The scientific community can do their part by getting better at communicating with the media and the public. Try to avoid the temptation to overhype your own research, just because it is the most interesting thing in the world to you personally and you feel hype will help your funding. Don’t make it easy for the media to sensationalize your research – you should be the ones trying to hold back the reins. Perhaps this is too much to hope for – market forces conspire too much to promote sensationalism.
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine - 0 views

  • most of the worries about income inequality are bogus, but some are probably better grounded, and more serious, than even many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runners-up doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
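The winner-take-all arithmetic of moving first can be sketched with a toy model (all prices hypothetical): suppose the market price closes half the remaining gap to the post-news fair value with each successive trade, so the earliest trader locks in more than all later traders combined.

```python
# Toy model: four traders react to the same news at different speeds.
fair_value = 110.0   # hypothetical post-news fair price
price = 100.0        # pre-news market price
gains = []

for _ in range(4):                        # traders act in order of speed
    gains.append(fair_value - price)      # profit locked in at the buy price
    price += (fair_value - price) * 0.5   # each trade closes half the remaining gap

print(gains)  # → [10.0, 5.0, 2.5, 1.25]
```

In this sketch the fastest trader's gain exceeds the other three traders' gains combined, which is exactly the concentration effect the annotation describes.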
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
Weiye Loh

Roger Pielke Jr.'s Blog: Intolerance: Virtue or Anti-Science "Doublespeak"? - 0 views

  • John Beddington, the Chief Scientific Advisor to the UK government, has identified a need to be "grossly intolerant" of certain views that get in the way of dealing with important policy problems: We are grossly intolerant, and properly so, of racism. We are grossly intolerant, and properly so, of people who [are] anti-homosexuality... We are not—and I genuinely think we should think about how we do this—grossly intolerant of pseudo-science, the building up of what purports to be science by the cherry-picking of the facts and the failure to use scientific evidence and the failure to use scientific method. One way is to be completely intolerant of this nonsense. That we don't kind of shrug it off. We don't say: ‘oh, it's the media’ or ‘oh they would say that wouldn’t they?’ I think we really need, as a scientific community—and this is a very important scientific community—to think about how we do it.
  • Fortunately, Andrew Stirling, research director of the Science Policy Research Unit (which these days I think just goes by SPRU) at the University of Sussex, provides a much healthier perspective: What is this 'pseudoscience'? For Beddington, this seems to include any kind of criticism from non-scientists of new technologies like genetically modified organisms, much advocacy of the 'precautionary principle' in environmental protection, or suggestions that science itself might also legitimately be subjected to moral considerations. Who does Beddington hold to blame for this "politically or morally or religiously motivated nonsense"? For anyone who really values the central principles of science itself, the answer is quite shocking. He is targeting effectively anyone expressing "scepticism" over what he holds to be 'scientific' pronouncements—whether on GM, climate change or any other issue. Note, it is not irrational "denial" on which Beddington is calling for 'gross intolerance', but the eminently reasonable quality of "scepticism"! The alarming contradiction here is that organised, reasoned, scepticism—accepting rational argument from any quarter without favour for social status, cultural affiliations  or institutional prestige—is arguably the most precious and fundamental quality that science itself has (imperfectly) to offer. Without this enlightening aspiration, history shows how society is otherwise all-too-easily shackled by the doctrinal intolerance, intellectual blinkers and authoritarian suppression of criticism so familiar in religious, political, cultural and media institutions.
  • Stirling concludes: [T]he basic aspirational principles of science offer the best means to challenge the ubiquitously human distorting pressures of self-serving privilege, hubris, prejudice and power. Among these principles are exactly the scepticism and tolerance against which Beddington is railing (ironically) so emotionally! Of course, scientific practices like peer review, open publication and acknowledgement of uncertainty all help reinforce the positive impacts of these underlying qualities. But, in the real world, any rational observer has to note that these practices are themselves imperfect. Although rarely achieved, it is inspirational ideals of universal, communitarian scepticism—guided by progressive principles of reasoned argument, integrity, pluralism, openness and, of course, empirical experiment—that best embody the great civilising potential of science itself. As the motto of none other than the Royal Society loosely enjoins (also sometimes somewhat ironically) "take nothing on authority". In this colourful instance of straight talking then, John Beddington is himself coming uncomfortably close to a particularly unsettling form of unscientific—even (in a deep sense) anti-scientific—'double speak'.
  • Anyone who really values the progressive civilising potential of science should argue (in a qualified way as here) against Beddington's intemperate call for "complete intolerance" of scepticism. It is the social and human realities shared by politicians, non-government organisations, journalists and scientists themselves, that make tolerance of scepticism so important. The priorities pursued in scientific research and the directions taken by technology are all as fundamentally political as other areas of policy. No matter how uncomfortable and messy the resulting debates may sometimes become, we should never be cowed by any special interest—including that of scientific institutions—away from debating these issues in open, rational, democratic ways. To allow this to happen would be to undermine science itself in the most profound sense. It is the upholding of an often imperfect pursuit of scepticism and tolerance that offer the best way to respect and promote science. Such a position is, indeed, much more in keeping with the otherwise-exemplary work of John Beddington himself. Stirling's eloquent response provides a nice tonic to Beddington's unsettling remarks. Nonetheless, Beddington's perspective should be taken as a clear warning as to the pathological state of highly politicized science these days.
Weiye Loh

Libel Chill and Me « Skepticism « Critical Thinking « Skeptic North - 0 views

  • Skeptics may by now be very familiar with recent attempts in Canada to ban wifi from public schools and libraries.  In short: there is no valid scientific reason to be worried about wifi.  It has also been revealed that the chief scientists pushing the wifi bans have been relying on poor data and even poorer studies.  By far the vast majority of scientific data that currently exists supports the conclusion that wifi and cell phone signals are perfectly safe.
  • So I wrote about that particular topic in the summer.  It got some decent coverage, but the fear mongering continued. I wrote another piece after I did a little digging into one of the main players behind this, one Rodney Palmer, and I discovered some decidedly pseudo-scientific tendencies in his past, as well as some undisclosed collusion.
  • One night I came home after a long day at work, a long commute, and a phone call that a beloved family pet was dying and would soon be in significant pain.  That is the state I was in when I read the news about Palmer and the Parliamentary committee.
  • That’s when I wrote my last significant piece for Skeptic North.  Titled, “Rodney Palmer: When Pseudoscience and Narcissism Collide,” it was a fiery take-down of every claim I heard Palmer speak before the committee, as well as reiterating some of his undisclosed collusion, unethical media tactics, and some reasons why he should not be considered an expert.
  • This time, the article got a lot more reader eyeballs than anything I had ever written for this blog (or my own) and it also caught the attention of someone on a school board which was poised to vote on wifi.  In these regards: Mission very accomplished.  I finally thought that I might be able to see some people in the media start to look at Palmer’s claims with a more critical eye than they had been previously, and I was flattered at the mountain of kind words, re-tweets, reddit comments and Facebook “likes.”
  • The comments section was mostly supportive of my article, and they were one of the few things that kept me from hiding in a hole for six weeks.  There were a few comments in opposition to what I wrote, some sensible, most incoherent rambling (one commenter, when asked for evidence, actually linked to a YouTube video which they referred to as “peer reviewed”).
  • One commenter was none other than the titular subject of the post, Rodney Palmer himself.  Here is a screen shot of what he said: Screen shot of the Libel/Slander threat.
  • Knowing full well the story of the libel threat against Simon Singh, I’ve always thought that if ever a threat like that came my way, I’d happily beat it back with the righteous fury and good humour of a person with the facts on their side.  After all, if I’m wrong, you’d be able to prove me wrong, rather than try to shut me up with a threat of a lawsuit.  Indeed, I’ve been through a similar situation once before, so I should be an old hat at this! Let me tell you friends, it’s not that easy.  In fact, it’s awful.  Outside observers could easily identify that Palmer had no case against me, but that was still cold comfort to me.  It is a very stressful situation to find yourself in.
  • The state of libel and slander laws in this country are such that a person can threaten a lawsuit without actually threatening a lawsuit.  There is no need to hire a lawyer to investigate the claims, look into who I am, where I live, where I work, and issue a carefully worded threatening letter demanding compliance.  All a person has to say is some version of  “Libel.  Slander.  Hmmmm….,” and that’s enough to spook a lot of people into backing off. It’s a modern day bogeyman.  They don’t have to prove it.  They don’t have to act on it.  A person or organization just has to say “BOO!” with sufficient seriousness, and unless you’ve got a good deal of editorial and financial support, discussion goes out the window. Libel Chill refers to the ‘chilling effect’ that the possibility of a libel/slander lawsuit has.  If a person is scared they might get sued, then they won’t even comment on a piece at all.  In my case, I had already commented three times on the wifi scaremongering, but this bogus threat against me was surely a major contributing factor to my not commenting again.
  • I ceased to discuss anything in the comment thread of the original article, and even shied away from other comment threads, calling me out.  I learned a great deal about the wifi/EMF issue since I wrote the article, but I did not comment on any of it, because I knew that Palmer and his supporters were watching me like a hawk (sorry to stretch the simile), and would likely try to silence me again.  I couldn’t risk a lawsuit.  Even though I knew there was no case against me, I couldn’t afford a lawyer just to prove that I didn’t do anything illegal.
  • The Libel and Slander Act of Ontario, 1990 hasn’t really caught up with the internet.  There isn’t a clear precedent that defines a blog post, Twitter feed or Facebook post as falling under the umbrella of “broadcast,” which is what the bill addresses.  If I had written the original article in print, Palmer would have had six weeks to file suit against me.  But the internet is only kind of considered ‘broadcast.’  So it could be just six weeks, but he could also have up to two years to act and get a lawyer after me.  Truth is, there’s not a clear demarcation point for our Canadian legal system.
  • Libel laws in Canada are somewhere in between the Plaintiff-favoured UK system, and the Defendant-favoured US system.  On the one hand, if Palmer chose to incur the expense and time to hire a lawyer and file suit against me, the burden of proof would be on me to prove that I did not act with malice.  Easy peasy.  On the other hand, I would have a strong case that I acted in the best interests of Canadians, which would fall under the recent Supreme Court of Canada decision on protecting what has been termed, “Responsible Communication.”  The Supreme Court of Canada decision does not grant bloggers immunity from libel and slander suits, but it is a healthy dose of welcome freedom to discuss issues of importance to Canadians.
  • Palmer himself did not specify anything against me in his threat.  There was nothing particular that he complained about, he just said a version of “Libel and Slander!” at me.  He may as well have said “Boo!”
  • This is not a DBAD discussion (although I wholeheartedly agree with Phil Plait there). 
  • If you’d like to boil my lessons down to an acronym, I suppose the best one would be DBRBC: Don’t be reckless. Be Careful.
  • I wrote a piece that, although it was not incorrect in any measurable way, was written with fire and brimstone, piss and vinegar.  I stand by my piece, but I caution others to be a little more careful with the language they use.  Not because I think it is any less or more tactically advantageous (because I’m not sure anyone can conclusively demonstrate that being an aggressive jerk is an inherently better or worse communication tool), but because the risks aren’t always worth it.
  • I’m not saying don’t go after a person.  There are egomaniacs out there who deserve to be called out and taken down (verbally, of course).  But be very careful with what you say.
  • ask yourself some questions first: 1) What goal(s) are you trying to accomplish with this piece? Are you trying to convince people that there is a scientific misunderstanding here?  Are you trying to attract the attention of the mainstream media to a particular facet of the issue?  Are you really just pissed off and want to vent a little bit?  Is this article a catharsis, or is it communicative?  Be brutally honest with your intentions, it’s not as easy as you think.  Venting is okay.  So is vicious venting, but be careful what you dress it up as.
  • 2) In order to attain your goals, did you use data, or personalities?  If the former, are you citing the best, most current data you have available to you? Have you made a reasonable effort to check your data against any conflicting data that might be out there? If the latter, are you providing a mountain of evidence, and not just projecting onto personalities?  There is nothing inherently immoral or incorrect with going after the personalities.  But it is a very risky undertaking. You have to be damn sure you know what you’re talking about, and damn ready to defend yourself.  If you’re even a little loose with your claims, you will be called out for it, and a legal threat is very serious and stressful. So if you’re going after a personality, is it worth it?
  • 3) Are you letting the science speak for itself?  Are you editorializing?  Are you pointing out what part of your piece is data and what part is your opinion?
  • 4) If this piece was written in anger, frustration, or otherwise motivated by a powerful emotion, take a day.  Let your anger subside.  It will.  There are many cathartic enterprises out there, and you don’t need to react to the first one that comes your way.  Let someone else read your work before you share it with the internet.  Cooler heads definitely do think more clearly.
Weiye Loh

Rationally Speaking: On ethics, part III: Deontology - 0 views

  • Plato showed convincingly in his Euthyphro dialogue that even if gods existed they would not help at all settling the question of morality.
  • Broadly speaking, deontological approaches fall into the same category as consequentialism — they are concerned with what we ought to do, as opposed to what sort of persons we ought to be (the latter is, most famously, the concern of virtue ethics). That said, deontology is the chief rival of consequentialism, and the two have distinct advantages and disadvantages that seem irreducible.
  • Here is one way to understand the difference between consequentialism and deontology: for the former the consequences of an action are moral if they increase the Good (which, as we have seen, can be specified in different ways, including increasing happiness and/or decreasing pain). For the latter, the fundamental criterion is conformity to moral duties. You could say that for the deontologist the Right (sometimes) trumps the Good. Of course, as a result consequentialists have to go through the trouble of defining and justifying the Good, while deontologists have to tackle the task of defining and justifying the Right.
  • two major “modes” of deontology: agent-centered and victim-centered. Agent-centered deontology is concerned with permissions and obligations to act toward other agents, the typical example being parents’ duty to protect and nurture their children. Notice the immediate departure from consequentialism, here, since the latter is an agent-neutral type of ethics (we have seen that it has trouble justifying the idea of special treatment of relatives or friends). Where do such agent-relative obligations come from? From the fact that we make explicit or implicit promises to some agents but not others. By bringing my child into the world, for instance, I make a special promise to that particular individual, a promise that I do not make to anyone else’s children. While this certainly doesn’t mean that I don’t have duties toward other children (like inflicting no intentional harm), it does mean that I have additional duties toward my own children as a result of the simple fact that they are mine.
  • Agent-centered deontology gets into trouble because of its close philosophical association to some doctrines that originated within Catholic theology, like the idea of double effect. (I should immediately clarify that the trouble is not due to the fact that these doctrines are rooted in a religious framework, it’s their intrinsic moral logic that is at issue here.) For instance, for agent-centered deontologists we are morally forbidden from killing innocent others (reasonably enough), but this prohibition extends even to cases when so doing would actually save even more innocents.
  • Those familiar with trolleology will recognize one of the classic forms of the trolley dilemma here: is it right to throw an innocent person in front of the out of control trolley in order to save five others? For consequentialists the answer is a no-brainer: of course yes, you are saving a net of four lives! But for the deontologist you are now using another person (the innocent you are throwing to stop the trolley) as a means to an end, thus violating one of the forms of Kant’s imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end.”
  • The other form, in case you are wondering, is: “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.”
  • Victim-centered deontologies are right- rather than duty-based, which of course does raise the question of why we think of them as deontological to begin with.
  • The fundamental idea about victim-centered deontology is the right that people have not to be used by others without their consent. This is where we find Robert Nozick-style libertarianism, which I have already criticized on this blog. One of the major implications of this version of deontology is that there is no strong moral duty to help others.
  • contractarian deontological theories. These deal with social contracts of the type, for instance, discussed by John Rawls in his theory of justice. However, I will devote a separate post to contractarianism, in part because it is so important in ethics, and in part because one can argue that contractarianism is really a meta-ethical theory, and therefore does not strictly fall under deontology per se.
  • deontological theories have the advantage over consequentialism in that they account for special concerns for one’s relatives and friends, as we have seen above. Consequentialism, by comparison, comes across as alienating and unreasonably demanding. Another advantage of deontology over consequentialism is that it accounts for the intuition that even if an act is not morally demanded it may still be praiseworthy. For a consequentialist, on the contrary, if something is not morally demanded it is then morally forbidden. (Another way to put this is that consequentialism is a more minimalist approach to ethics than deontology.) Moreover, deontology also deals much better than consequentialism with the idea of rights.
  • deontological theories run into the problem that they seem to give us permission, and sometimes even require, to make things actually morally worse in the world. Indeed, a strict deontologist could actually cause human catastrophes by adhering to Kant’s imperative and still think he acted morally (Kant at one point remarked that it is “better the whole people should perish” than that injustice be done — one wonders injustice to whom, since nobody would be left standing). Deontologists also have trouble dealing with the seemingly contradictory ideas that our duties are categorical (i.e., they do not admit of exceptions), and yet that some duties are more important than others. (Again, Kant famously stated that “a conflict of duties is inconceivable” while forgetting to provide any argument in defense of such a bold statement.)
  • One famous attempt at this reconciliation was proposed by Thomas Nagel (he of “what is it like to be a bat?” fame). Nagel suggested that perhaps we should be consequentialists when it comes to agent-neutral reasoning, and deontologists when we engage in agent-relative reasoning. He neglected to specify, however, any non-mysterious way to decide what to do in those situations in which the same moral dilemma can be seen from both perspectives.
Weiye Loh

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left.
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble translating commands for making a fist, let alone probing anyone's mental secrets.
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.
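The decoding step described in the annotations above, mapping recorded firing rates to an intended cursor movement, can be sketched in a few lines. This is a hedged illustration only, not BrainGate's actual algorithm: the firing rates, the linear model and every parameter below are simulated assumptions chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 16, 500

# Hypothetical ground truth: a linear map from firing rates to (vx, vy).
true_W = rng.normal(size=(2, n_neurons))

# Simulated spike counts (Poisson firing) and the cursor velocities they drive.
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_W.T + rng.normal(scale=0.5, size=(n_samples, 2))

# "Decoding" here is just fitting the linear map back from the recorded data
# by least squares; given enough samples the estimate recovers the true map.
W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

max_error = np.abs(W_hat - true_W.T).max()
print(f"max coefficient error: {max_error:.3f}")
```

A plain least-squares fit is the simplest possible decoder; real brain-computer interfaces rely on more sophisticated, adaptive decoders, but the core idea of the sketch stands: distinct intended movements produce statistically distinguishable neural activity, and a fitted model translates one into the other.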
Weiye Loh

Net-Neutrality: The First Amendment of the Internet | LSE Media Policy Project - 0 views

  • debates about the nature, the architecture and the governing principles of the internet are not merely technical or economic discussions.  Above all, these debates have deep political, social, and cultural implications and become a matter of public, national and global interest.
  • In many ways, net neutrality could be considered the first amendment of the internet; no pun intended. However, just as with freedom of speech, the principle of net neutrality cannot be treated as absolute or as a fetish. Even in a democracy, the right to say anything does not hold at all times and in all contexts. Limiting the core principle of freedom of speech in a democracy is only possible in very specific circumstances, such as harm, racism or the public interest. Along the same lines, compromising on the principle of net neutrality should happen only for very specific and clearly defined reasons that are transparent and serve not commercial private interests but public interests, or that are implemented to guarantee an excellent quality of service for all.
  • One of the only really convincing arguments of those challenging net neutrality is that due to the dramatic increases in streaming activity and data-exchange through peer-to-peer networks, the overall quality of service risks being compromised if we stick to data being treated on a first come first serve basis. We are being told that popular content will need to be stored closer to the consumer, which evidently comes at an extra cost.
  • Implicitly two separate debates are being collapsed here and I would argue that we need to separate both. The first one relates to the stability of the internet as an information and communication infrastructure because of the way we collectively use that infrastructure. The second debate is whether ISPs and telecommunication companies should be allowed to differentiate in their pricing between different levels of quality of access, both towards consumers and content providers.
  • Just as with freedom of speech, circumstances can be found in which the principle while still cherished and upheld, can be adapted and constrained to some extent. To paraphrase Tim Wu (2008), the aspiration should still be ‘to treat all content, sites, and platforms equally’, but maybe some forms of content should be treated more equally than others in order to guarantee an excellent quality of service for all. However, the societal and political implications of this need to be thought through in detail and as with freedom of speech itself, it will, I believe, require strict regulation and conditions.
  • With regard to the first debate, on internet stability, a case can be made for allowing internet operators to differentiate between different types of data with different needs – if, for any reason, the quality of service of the internet as a whole can no longer be guaranteed.
  • Concerning the second debate, on differential pricing, it is fair to say that from a public interest and civil liberty perspective the consolidation and institutionalization of a commercially driven two-tiered internet is neither acceptable nor possible to legitimate, as is allowing operators to privilege the quality of provision of certain kinds of content over others. A core principle such as net neutrality should never be relinquished for the sake of private interests and profit-making strategies – on behalf of industry or for others. If we need to compromise on net neutrality it would always have to be partial, circumscribed and only to improve the quality of service for all, not just for the few who can afford it.
  • Separating these two debates exposes the crux of the current net-neutrality debate. In essence, we are being urged to give up on the principle of net-neutrality to guarantee a good quality of service.  However, this argument is actually a pretext for the telecom industry to make content-providers pay for the facilitation of access to their audiences – the internet subscribers. And this again can be linked to another debate being waged amongst content providers: how do we make internet users pay for the content they access online? I won't open that can of worms here, but I will make my point clear.  Telecommunication industry efforts to make content providers pay for access to their audiences do not offer legitimate reasons to suspend the first amendment of the internet.
Weiye Loh

When Rationalization Masquerades as Reason - NYTimes.com - 0 views

  •  
    It's awfully hard for human beings to step outside of their predispositions and gut reactions, which may be why it's so hard to have a rational discussion about the findings of science, which is all about sidelining such distorting biases. But that makes it even harder to have a rational discussion about contentious issues when there is a paucity of firm data - and thus a lot of running room for advocates of one stripe or another, along with free rein for feelings.
Weiye Loh

"The Particle-Emissions Dilemma" by Henning Rodhe | Project Syndicate - 0 views

  • according to the United Nations' Intergovernmental Panel on Climate Change, the cooling effect of white particles may counteract as much as half of the warming effect of carbon dioxide. So, if all white particles were removed from the atmosphere, global warming would increase considerably. The dilemma is that all particles, whether white or black, constitute a serious problem for human health. Every year, an estimated two million people worldwide die prematurely, owing to the effects of breathing polluted air. Furthermore, sulfur-rich white particles contribute to the acidification of soil and water.
  • Naturally, measures targeting soot and other short-lived particles must not undermine efforts to reduce CO2 emissions. In the long term, emissions of CO2 and other long-lived greenhouse gases constitute the main problem. But a reduction in emissions of soot (and other short-lived climate pollutants) could alleviate the pressures on the climate in the coming decades.
  • what do we do about white particles? How do we weigh improved health and reduced mortality rates for hundreds of thousands of people against the serious consequences of global warming? It is difficult to imagine that any country's officials would knowingly submit their population to higher health risks by not acting to reduce white particles solely because they counteract global warming. On the contrary, sulfur emissions have been reduced over the last few decades in both Europe and North America, owing to a desire to promote health and counter acidification; and China, too, seems to be taking measures to reduce sulfur emissions and improve the country's terrible air quality. But, in other parts of the world where industrialization is accelerating, sulfur emissions continue to increase.
  • Nobel laureate Paul Crutzen has suggested another solution: manipulate the climate by releasing white sulfur particles high up in the stratosphere, where they would remain for several years, exerting a proven cooling effect on Earth’s climate without affecting human health. In 1991, the eruption of Mount Pinatubo in the Philippines created a haze of sulfur in the higher atmosphere that cooled the entire planet approximately half a degree Celsius for two years afterwards.
  • Other methods of geoengineering – that is, consciously manipulating the climate – include painting the roofs of houses white in order to increase the reflection of sunlight, covering deserts with reflective plastic, and fertilizing the seas with iron in order to increase the absorption of CO2.
  •  
    Particle emissions into Earth's atmosphere affect both human health and the climate. So we should limit them, right? For health reasons, yes, we should indeed do that; but, paradoxically, limiting such emissions would cause global warming to increase.
Weiye Loh

Rubber data | plus.maths.org - 0 views

  • Maps are great because our brains are good at making sense of pictures. So representing data in a visual form is a good way of understanding it. The question is how.
  • in reality things are more complicated. You'll probably have thousands of books and customers. Each book now comes, not with a pair of numbers, but with a huge long list containing the rating of each customer or perhaps a blank if a specific customer hasn't rated the book. Now you can't simply plot the data and spot the pattern. This is where topology comes to the rescue: it gives a neat way of turning shapes into networks. Suppose you've got a wobbly circle as in the figure below. You can cover it by overlapping regions and then draw a dot on a piece of paper for each region. You then connect dots corresponding to overlapping regions by an edge. The network doesn't retain the wobbliness of the shape, that information has been lost, but its topology, the fact that it's circular, is clearly visible. And the great thing is that it doesn't matter what kind of covering you use to make your network. As long as the regions are small enough — the resolution is high enough — the network will draw out the topology of the shape.
  •  
    The reason why even the most bewildered tourist can find their way around the tube network easily is that the map does away with geographical accuracy in favour of clarity. The map retains the general shape of the tube network, the way the lines connect, but it distorts the actual distances between stations and pretends that trains only run in straight lines, horizontally, vertically or inclined at 45 degree angles. That isn't how they run in reality, but it makes the map a lot easier to read. It's a topological map named after an area of maths, topology, which tries to understand objects in terms of their overall shape rather than their precise geometry. It's also known as rubber sheet geometry because you're allowed to stretch and squeeze shapes, as long as you don't tear them.
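The covering construction described in the annotation above can be sketched in code. The following is a minimal illustration under assumed parameters, not the article's actual method: points are sampled from a noisy circle, the circle is covered by overlapping angular sectors (the sector count and overlap width are arbitrary choices), and a network is built with one node per sector and an edge wherever two sectors share points.

```python
import itertools
import math
import random

random.seed(0)

# Sample 200 points from a noisy circle.
pts = []
for _ in range(200):
    t = random.uniform(0, 2 * math.pi)
    r = 1 + random.gauss(0, 0.05)
    pts.append((r * math.cos(t), r * math.sin(t)))

# Cover the circle with 8 angular sectors, each 1.5x the base width,
# so every sector overlaps its two neighbours.
n_sectors = 8
base = 2 * math.pi / n_sectors
width = 1.5 * base
cover = []
for k in range(n_sectors):
    lo = k * base
    members = set()
    for idx, (x, y) in enumerate(pts):
        angle = math.atan2(y, x) % (2 * math.pi)
        if (angle - lo) % (2 * math.pi) < width:  # handles wrap-around
            members.add(idx)
    cover.append(members)

# Build the network: one node per non-empty cover set, an edge when
# two cover sets share at least one point.
nodes = [k for k in range(n_sectors) if cover[k]]
edges = {(i, j) for i, j in itertools.combinations(nodes, 2)
         if cover[i] & cover[j]}
degree = {n: sum(1 for e in edges if n in e) for n in nodes}
print(sorted(degree.values()))
```

With this cover the network comes out as an 8-node cycle (every node has exactly two neighbours): the circular topology of the data survives even though all metric detail, including the wobble, has been thrown away.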
Weiye Loh

Balderdash: Liberalism and Tolerance - 0 views

  •  
    "Politics can be a sensitive subject and a number of SNS users have decided to block, unfriend, or hide someone because of their politics or posting activities. In all, 18% of social networking site users have taken one of those steps... Liberals are the most likely to have taken each of these steps to block, unfriend, or hide. In all, 28% of liberals have blocked, unfriended, or hidden someone on SNS because of one of these reasons, compared with 16% of conservatives and 14% of moderates" Tom Lehrer sums up the intolerance of the philosophy of tolerance best: "I know that there are people who do not love their fellow man, and I hate people like that!"
Weiye Loh

Rationally Speaking: Truth from fiction: truth or fiction? - 0 views

  • Literature teaches us about life. Literature helps us understand the world.
  • this belief in truth-from-fiction is the party line for those who champion the merits of literature. Eminent English professor and critic Harold Bloom proclaims, in his bestselling How to Read and Why, that one of the main reasons to read literature is because "we require knowledge, not just of self and others, but of the way things are."
  • why would we expect literature to be a reliable source of knowledge about "the way things are"? After all, the narratives which are the most gripping and satisfying to read are not the most representative of how the world actually works. They have dramatic resolutions, foreshadowing, conflict, climax, and surprise. People tend to get their comeuppance after they misbehave. People who pursue their dream passionately tend to succeed. Disaster tends to strike when you least expect it. These narratives are over-represented in literature because they're more gratifying to read; why would we expect to learn from them about "the way things are"?
  • even if authors were all trying to faithfully represent the world as they perceived it, why would we expect their perceptions to be any more universally true than anyone else's?
  • I can't see any reason to give any more weight to the implicit arguments of a novel than we would give to the explicit arguments of any individual person. And yet when we read a novel or study it in school, especially if it's a hallowed classic, we tend to treat its arguments as truths.
  •  
    FRIDAY, JUNE 18, 2010 Truth from fiction: truth or fiction?
Weiye Loh

Should This Be the Last Generation? - Opinionator Blog - NYTimes.com - 0 views

  • Have you ever thought about whether to have a child? If so, what factors entered into your decision? Was it whether having children would be good for you, your partner and others close to the possible child, such as children you may already have, or perhaps your parents?
  • Some may also think about the desirability of adding to the strain that the nearly seven billion people already here are putting on our planet’s environment. But very few ask whether coming into existence is a good thing for the child itself.
  • we think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don’t usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as “the asymmetry” and it is not easy to justify.
  • How good does life have to be to make it reasonable to bring a child into the world? Is the standard of life experienced by most people in developed nations today good enough to make this decision unproblematic?
  • Arthur Schopenhauer held that even the best life possible for humans is one in which we strive for ends that, once achieved, bring only fleeting satisfaction.
  • One of Benatar’s arguments trades on something like the asymmetry noted earlier. To bring into existence someone who will suffer is, Benatar argues, to harm that person, but to bring into existence someone who will have a good life is not to benefit him or her.
  • Hence continued reproduction will harm some children severely, and benefit none.
  • human lives are, in general, much less good than we think they are. We spend most of our lives with unfulfilled desires, and the occasional satisfactions that are all most of us can achieve are insufficient to outweigh these prolonged negative states.
  •  
    June 6, 2010, 5:15 PM Should This Be the Last Generation? By PETER SINGER