New Media Ethics 2009 course / Group items tagged: third

Weiye Loh

Leong Sze Hian stands corrected? | The Online Citizen - 0 views

  • In your article, you make the argument that the Straits Times Forum Editor “was merely amending his (my) letter to cite the correct statistics”: “For example, the Education Minister said ‘How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE’ - but Mr Samuel Wee wrote ‘His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination’.” Kind sir, the statistics state that 1 in 2 are in the top 66.6% (which, incidentally, includes the top fifth of the bottom 50%!). Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university–which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf I look forward to your reply, Mr Leong. Thank you for taking the time to read this message.
  • we should, wherever possible, try to agree to disagree, as it is healthy to have and to encourage different viewpoints.
    • Weiye Loh
       
      Does that mean that every viewpoint can and should be accepted as correct to encourage differences? 
  • If I say I think it is fair in Singapore, because half of the bottom one-third of the people make it to the top two-thirds, it does not mean that someone can quote me and say that I said what I said because half the bottom one-third of people did not make it. I think it is alright to say that I do not agree entirely with what was said, because does it also mean on the flip side that half of the bottom one-third of the people did not make it? This is what I mean by quoting someone out of context, by using statistics that I did not cite and implying that I did, or by innuendo.
  • Moreover, depending on the methodology, definition, sampling, etc., half of the bottom one-third of the people making it does not necessarily mean that half did not make it, because some may not be in the population for various reasons, like emigration, not turning up, transfer, or whether adjustments are made for the mobility of people up or down the social strata over time. If I did not use a particular statistic to state my case, I don’t think it is appropriate to quote me and say that you agree with me by citing statistics from a third-party source, like the MOE chart in the Straits Times article, instead of quoting the statistics that I cited.
  • I cannot find anything in any of the media reports to say with certainty that the Minister backed up his remarks with direct reference to the MOE chart. There is also nothing in the narrative that only 10 per cent of children from one-to-three room flats make it to university – which is to say, 90 per cent of them don’t. The ’90 per cent’ cannot be attributed to what the minister said; at best it is the writer’s interpretation of the MOE chart.
  • Interesting exchange of letters. Samuel’s interpretation of the statistics provided by Ng Eng Hen and ST is correct. There is little doubt about it. While I can see where Leong Sze Hian is coming from, I don’t totally agree with him. Specifically, Samuel’s first statement (only ~10% of students living in 1-3 room flats make it to university) is directed at ST’s report that education is a good social leveller, not at Ng. It is therefore a valid point to make.
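The arithmetic under dispute in this exchange is a bare complement claim: if “top two-thirds” and “bottom third” partition the PSLE cohort, the two shares must sum to 100%. A minimal sketch in Python, using only the percentages quoted above (Leong’s caveat about emigration, no-shows and transfers is precisely the claim that the measured group may not be a clean partition):

```python
# The figures are the ones quoted in the exchange, not new data.
in_top_two_thirds = 0.50   # "One in two scores in the top two-thirds at PSLE"

# If the two bands partition the cohort, the shares must sum to 100%.
in_bottom_third = 1 - in_top_two_thirds
print(f"share of poorer kids in the bottom third: {in_bottom_third:.0%}")  # 50%

# The natural comparison point: by construction, two-thirds of ALL pupils
# are in the top two-thirds, so 50% sits below the 66.7% base rate.
print(f"cohort-wide base rate for the top two-thirds: {2 / 3:.1%}")        # 66.7%
```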
Weiye Loh

Singapore does not have Third World Living Standards | the kent ridge common - 0 views

  • I apologise for this long overdue article to highlight the erroneous insinuations in my fellow KRC writer’s post, “UBS: Singapore has Third World Living Standards”.
  • The Satay Club post’s title was “UBS: Singapore has Russian Standard of Living”. The original UBS report was even less suggestive, and in fact hardly made any value judgment at all. The original UBS report just presented a whole list of statistics, according to whichever esoteric mathematical calculation they used.
  • As my JC economics teacher quipped, “If you abuse the statistics long enough, it will confess.” On one hand, UBS has not suggested that Singapore has third world living standards. On the other hand, I think it is justified to question how my KRC writer has managed to conclude from these statistics that Singapore has “Third World Living Standards”.
  • The terminology of “Third World” and “First World” is also problematic. The more “politically correct” terms used now are “developing” and “developed”. Whatever the charge, whatever your choice of terminology, Moscow and Tallinn are hardly “Third World” or “developing”. I have never been there myself, and unfortunately have no personal account to give, but a brief look at the countries listed below Singapore in the Wage Levels index - Beijing, Shanghai, Santiago de Chile, Buenos Aires, Delhi, even Mexico City - would make me cautious about abstracting from these statistics any indication at all about “living standards”.
  • The living “habits” and rhythms of life in all these various cities are as heterogeneous as these statistics are homogenizing, by placing them all on the same scale of measurement. This is not to say that we cannot have fruitful comparatives across societies - but these statistics are not sufficient for such a venture. At the very least, UBS’ mathematical methodology requires a greater analysis than was provided in the previous KRC article. The burden of proof here is really on my fellow KRC writer to show that Singapore has Third World living standards, and the analysis offered so far is not enough to carry it.
Weiye Loh

Straits Times Forum explains why it heavily edited letter | The Online Citizen - 0 views

  • 1. You stated we wrongly replaced the statistic you cited with another from Ms Rachel Chang’s article on March 8 (“School system still the ‘best way to move up’”). Your original letter: “It is indeed heartwarming to learn that 90% of children from one-to-three-room flats do not make it to university.” Reasons we edited it: Factual error, sense. There were two problems with your sentence. First, it was contradictory and didn’t make sense. Your original sentence cannot mean what it says unless you were elated over the fact that nine in 10 children from less well-off homes failed to qualify for university. So we edited it for sense, i.e., underscoring a positive feeling (heartwarming) with a positive fact, rather than the self-penned irony of a positive feeling (heartwarming) backed by a negative fact (90% failure rate to university admission by less well-off children). That was why we replaced the original statistic with the only one in Ms Chang’s March 8 report that matched your elation, that is, that 50 percent of less well-off children found tertiary success.
  • (Visa: Firstly, I find it hard to believe that nobody in the Straits Times office understands the meaning of sarcasm. Secondly, there was NO FACTUAL ERROR. Allow me to present to you the statistics, direct from The Straits Times themselves: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf )
  • Second, we replaced your original statistic because it did not exist in Ms Chang’s March 8 front-page report. Ms Chang quoted that statistic in a later article (“Poor kids need aspiration”; March 18, paragraph 5), which appeared after your letter was published. (Visa: It did not exist? Pay careful attention to the URL: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf . Look at the number. 20110308. 2011 03 08. 8th March 2011.)
  • 2. Your original letter: “His (Education Minister Dr Ng) statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination.” Reason we edited it: Factual error
  • “His statement is backed by the statistic that about 50 per cent of children from the bottom third of the socio-economic bracket score within the top two-thirds of their Primary School Leaving Examination cohort.” (Para 3 of Ms Chang’s March 8 report). (Visa: THIS IS NOT A FACTUAL ERROR. If 50% of a group score in the top two-thirds, then the remaining 50% of the group, by simple process of elimination, must score in the bottom third!)
  • You can assume that the stats are wrong, but you CANNOT CHANGE it and CONTINUE to use the contributor’s name! Where are your journalistic morals, ethics, and basic human decency? Since it is YOUR meaning, and not the writer’s, doesn’t it mean that you ABUSE, FABRICATE, and LIE to the public that that letter was by Samuel?
  • Either you print a news column or delete the letter. At least have some basic courtesy to call and ASK the writer for changes. Even a kid knows that it’s basic human decency to ask. HOW come you, as a grown man, YAP KOON HONG, can’t?
  • “So we edited it for sense … That was why we replaced the original statistic with the only one in Ms Chang’s March 8 report that matched your elation …” and “So, we needed to provide the context to the minister’s statement in order to retain the sense of your meaning.” These are extraordinary statements. My understanding is that editors edit for clarity and brevity. It is extraordinary, and perhaps only in Singapore, that editors also edit for “sense”.
  • 50% make it to university, therefore the other 50% did not make it. This kind of reasoning only works in primary or secondary school maths. In the real world, academia and journalism, the above would be considered a logical fallacy. To explain why, one must consider the fact that not going to university is not the same as “not making it”. World-class musicians, sports, volunteer work, overseas universities, travel - these are just a few of the reasons why we can’t just do a simple calculation when it comes to statistics. Bill Gates didn’t go to university; would we classify him as “not making it”? Sarcasm has no place in journalism, as it relies on visual and vocal indicators to interpret. I live in Washington, and if the above letter was sent to any newspaper it would be thrown out with all the other garbage faster than you could say freedom of speech. At least the editor in question here bothered to try his best to get the letter published.
  • “we felt your opinion deserved publication” Please, Yap Koon Hong, what you published was the very opposite of his opinion! As you yourself admitted, Samuel’s letter was ironic in nature, but you removed all traces of irony and changed the statistics to fabricate a sense of “elation” that Samuel did not mean to convey!
Weiye Loh

Do avatars have digital rights? - 20 views

hi weiye, i agree with you that this brings in the topic of representation. maybe you should try taking media and representation by Dr. Ingrid to discuss more on this. Going back to your questio...

avatars

Weiye Loh

Want your opinions distorted and misrepresented? Write in to The Straits Time... - 0 views

  • Letter sent by my good friend Samuel C. Wee to ST on the 8th of March, quoting statistics from their Page One infographic: (Read this closely!) I read with keen interest the news that social mobility in Singapore’s education system is still alive and well (“School system still ‘best way to move up’”; Monday). It is indeed heartwarming to learn that only 90% of children from one-to-three-room flats do not make it to university. I firmly agree with our Education Minister Dr Ng Eng Hen, who declared that “education remains the great social leveller in Singaporean society”. His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination. In recent years, there has been much debate about elitism and the impact that a family’s financial background has on a child’s educational prospects. Therefore, it was greatly reassuring to read about Dr Ng’s great faith in our “unique, meritocratic Singapore system”, which ensures that good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students. I would like to commend Ms Rachel Chang on her outstanding article. On behalf of the financially disadvantaged students of Singapore, I thank the fine journalists of the Straits Times for their tireless work in bringing to Singaporeans accurate and objective reporting.
  • What was actually published last Friday, March 18th 2011, under the headline “A reassuring experience of meritocratic system”: I READ with keen interest the news that social mobility in Singapore’s education system is still alive and well (“School system still ‘best way to move up’”; March 8). It is indeed heartwarming to learn that almost 50 per cent of children from one- to three-room flats make it to university and polytechnics. I firmly agree with Education Minister Ng Eng Hen, who said that education remains the great social leveller in Singapore society. His statement is backed by the statistic that about 50 per cent of children from the bottom third of the socio-economic bracket score within the top two-thirds of their Primary School Leaving Examination cohort. There has been much debate about elitism and the impact that a family’s financial background has on a child’s educational prospects. Therefore, it was reassuring to read about Dr Ng’s own experience of the ‘unique, meritocratic Singapore system’: he grew up in a three-room flat with five other siblings, and his medical studies at the National University of Singapore were heavily subsidised; later, he trained as a cancer surgeon in the United States using a government scholarship. The system also ensures that good, able students from the middle- and high-income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students.
  • To give me the byline would be an outrageous flattery and a gross injustice to the forum editors of ST, who took the liberty of taking my observations about the statistics and subtly replacing them with more politically correct (but significantly and essentially different) statistics.
  • Of course, ST reserves the right to edit my letter for clarity and length. When said statistics in question were directly taken from their original article, though, one has to wonder if there hasn’t been a breakdown in communication over there. I’m dreadfully sorry, forum editors, I should have double-checked my original source (your journalist Ms Rachel Chang) before sending my letter.
  • take a look at how my pride in our meritocratic system in my original letter has been transfigured into awe at Dr Ng’s background, for example! Dear friends, when an editor takes the time and effort to not just paraphrase but completely and utterly transform your piece in both intent and meaning, then what can we say but bravo.
  • There are surely no lazy slackers over at the Straits Times; instead we have evidently men and women who dedicate time and effort to correct their misguided readers, and protect them from the shame of having their real opinions published.
Weiye Loh

A lesson in citing irrelevant statistics | The Online Citizen - 0 views

  • Statistics that are quoted, by themselves, may be quite meaningless, unless they are on a comparative basis. To illustrate this, if we want to say that Group A (poorer kids) is not significantly worse off than Group B (richer kids), then it may be pointless to just cite the statistics for Group A, without Group B’s.
  • “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” “One in six scores in the top one-third at PSLE” What we need to know for comparative purposes is the percentage of richer kids who score in the top two-thirds too.
  • “… one in five scores in the top 30% at O and A levels… One in five goes to university and polys” What’s the data for richer kids? Since the proportion of the entire population going to university and polys has increased substantially, this clearly shows that poorer kids are worse off!
  • The Minister was quoted as saying: “My parents had six children. My first home as a young boy was a rental flat in Zion Road. We shared it as tenants with other families.” Citing individuals who made it may be of no “statistical” relevance, as what we need are the statistics as to the proportion of poorer kids to richer kids who get scholarships, proportional to their representation in the population.
  • “More spent on primary and secondary/JC schools. This means having significantly more and better teachers, and having more programmes to meet children’s specific needs” - What has spending more money, which is what most countries do, got to do with the argument about whether poorer kids are disadvantaged?
  • Straits Times journalist Li XueYing put the crux of the debate in the right perspective: “Dr Ng had noted that ensuring social mobility ‘cannot mean equal outcomes, because students are inherently different’. But can it be that those from low-income families are consistently ‘inherently different’ to such an extent?”
  • Relevant statistics: Perhaps the most damning statistic showing that poorer kids are disadvantaged was the chart from the Ministry of Education (provided by the Straits Times), which showed that the percentage of Primary 1 pupils who lived in 1- to 3-room HDB flats and subsequently progressed to university and/or polytechnic has been declining since around 1986.
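Leong’s comparative point can be made concrete. For band statistics such as “top two-thirds”, the cohort itself supplies the baseline: by construction, two-thirds of all pupils score in the top two-thirds and one-third score in the top third. A minimal sketch using only the figures quoted above; note that no baseline is quoted for the university-and-poly figure, which is exactly the gap Leong complains about:

```python
# (poorer kids' share, cohort baseline) for each PSLE band quoted above.
quoted = {
    "top two-thirds at PSLE": (1 / 2, 2 / 3),
    "top one-third at PSLE": (1 / 6, 1 / 3),
}

for band, (poorer, baseline) in quoted.items():
    print(f"{band}: poorer kids {poorer:.1%} vs baseline {baseline:.1%}"
          f" -> {poorer / baseline:.2f}x their proportionate share")
# top two-thirds at PSLE: poorer kids 50.0% vs baseline 66.7% -> 0.75x
# top one-third at PSLE: poorer kids 16.7% vs baseline 33.3% -> 0.50x
```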
Weiye Loh

ST Forum Editor was right after all | The Online Citizen - 0 views

  • I refer to the article “Straits Times! Why you edit until like that?” (theonlinecitizen, Mar 24). In my view, the Straits Times Forum Editor was not wrong to edit the letter.
  • From a statistical perspective, the forum letter writer, Mr Samuel Wee, was quoting the wrong statistics.
  • For example, the Education Minister said “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” - but Mr Samuel Wee wrote “His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination”. Another example is Mr Wee’s: “it is indeed heartwarming to learn that only 90% of children from one-to-three-room flats do not make it to university”, when the Straits Times article “New chapter in the Singapore Story” (http://pdfcast.org/pdf/new-chapter-in-singapore-story) of 8 March, on the Minister’s speech in Parliament, clearly showed in the graph “Progression to Unis and Polys” (Source: MOE (Ministry of Education)) that the “percentage of P1 pupils who lived in 1- to 3-room HDB flats and subsequently progressed to tertiary education” was about 50 per cent, and not the ’90 per cent who do not make it’ cited by Mr Samuel Wee.
  • The whole point of Samuel Wee’s letter is to present Dr Ng’s statistics from a different angle, so as to show that things are not as rosy as Dr Ng made them seem. As posters above have pointed out, if 50% of poor students score in the top 2/3s, that means the other 50% score in the bottom 1/3. In other words, poor students still score disproportionately lower grades. As for the statistic that 90% of poor students do not make it to university, this was shown in a graph provided in the ST. You can see it here: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf
  • Finally, Dr Ng did say: “[Social mobility] cannot be about neglecting those with abilities, just because they come from middle-income homes or are rich. It cannot mean holding back those who are able so that others can catch up.” Samuel Wee paraphrased this as: “…good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students.” I think it was an accurate paraphrase, because that was essentially what Dr Ng was saying. Samuel Wee’s paraphrase merely makes the callousness of Dr Ng’s remark stand out more clearly.
  • As to Mr Wee’s: “Therefore, it was greatly reassuring to read about Dr Ng’s great faith in our “unique, meritocratic Singapore system”, which ensures that good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students”, there was nothing in the Minister’s speech, the Straits Times, or any other media report that quoted the Minister in this context. In my opinion, the closest that I could find in all the reports to link in context to the Minister’s faith in our meritocratic system was what the Straits Times Forum Editor edited - “Therefore, it was reassuring to read about Dr Ng’s own experience of the ‘unique, meritocratic Singapore system’: he grew up in a three-room flat with five other siblings, and his medical studies at the National University of Singapore were heavily subsidised; later, he trained as a cancer surgeon in the United States using a government scholarship”.
  • To the credit of the Straits Times Forum Editor, in spite of the hundreds of letters that he receives in a day, he took the time and effort to: check the accuracy of the letter writer’s ‘quoted’ statistics; find the correct ‘quoted’ statistics to replace the writer’s wrongly ‘quoted’ ones; and check for misquotes out of context (in this case, what the Education Minister actually said), and then find the correct quote to amend the writer’s statement.
  • Kind sir, the statistics state that 1 in 2 are in the top 66.6% (Which, incidentally, includes the top fifth of the bottom 50%!) Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university–which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf
  • The writer made it a point to say that only 90% did not make it to university. It has been edited to say 50% made it to university AND POLYTECHNIC. Both are right, but the edited one is the one that makes the government look good.
Weiye Loh

New voting methods and fair elections : The New Yorker - 0 views

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda’s main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority’s favorite—it could, in theory, elect someone who was nobody’s favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware.
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferrable-vote elections can behave in topsy-turvy ways: they are what mathematicians call “non-monotonic,” which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn’t. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman’s calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It’s rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren’t usually published. But, in Burlington’s 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you’d think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result “a rather discouraging one, as regards the dream of a perfect democracy.” Szpiro goes so far as to write that “the democratic world would never be the same again.”
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
  • Range and approval voting deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
  •  
    WIN OR LOSE: No voting system is flawless. But some are less democratic than others. (By Anthony Gottlieb)
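The Borda and instant-runoff procedures described in these excerpts are mechanical enough to pin down in code. A minimal sketch with an invented nine-ballot profile; real election rules differ in tie-breaking and in how truncated ballots are handled:

```python
from collections import Counter

# Invented profile: each ballot ranks every candidate, most preferred first.
ballots = (
    [("A", "B", "C")] * 4 +   # 4 voters: A > B > C
    [("B", "C", "A")] * 3 +   # 3 voters: B > C > A
    [("C", "B", "A")] * 2     # 2 voters: C > B > A
)

def borda(ballots):
    """Borda count: with k candidates, a ballot gives k-1 points to its
    first choice, k-2 to its second, ..., 0 to its last."""
    k = len(ballots[0])
    scores = Counter()
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] += k - 1 - position
    return scores.most_common()

def instant_runoff(ballots):
    """Eliminate the candidate with the fewest first preferences and
    transfer those ballots to their next surviving choice, until someone
    holds a majority. Ties are broken alphabetically here; real rules vary."""
    remaining = set(ballots[0])
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots) or len(remaining) == 2:
            return leader
        remaining.remove(min(sorted(remaining), key=lambda c: tally[c]))

print(borda(ballots))           # [('B', 12), ('A', 8), ('C', 7)]
print(instant_runoff(ballots))  # B: C is eliminated, C's ballots move to B
```

On this profile, plurality (first past the post) would elect A, while Borda and instant-runoff both elect B. Profiles that flip the instant-runoff winner when a few ballots shift toward him are exactly how the non-monotonic cases Norman counts arise.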
Weiye Loh

Skepticblog » The Decline Effect - 0 views

  • The first group are those with an overly simplistic or naive sense of how science functions. This is a view of science similar to those films created in the 1950s and meant to be watched by students, with the jaunty music playing in the background. This view generally respects science, but has a significant underappreciation for the flaws and complexity of science as a human endeavor. Those with this view are easily scandalized by revelations of the messiness of science.
  • The second cluster is what I would call scientific skepticism – which combines a respect for science and empiricism as a method (really “the” method) for understanding the natural world, with a deep appreciation for all the myriad ways in which the endeavor of science can go wrong. Scientific skeptics, in fact, seek to formally understand the process of science as a human endeavor with all its flaws. It is therefore often skeptics pointing out phenomena such as publication bias, the placebo effect, the need for rigorous controls and blinding, and the many vagaries of statistical analysis. But at the end of the day, as complex and messy as the process of science is, a reliable picture of reality is slowly ground out.
  • The third group, often frustrating to scientific skeptics, are the science-deniers (for lack of a better term). They may take a postmodernist approach to science – science is just one narrative with no special relationship to the truth. Whatever you call it, what the science-deniers in essence do is describe all of the features of science that the skeptics do (sometimes annoyingly pretending that they are pointing these features out to skeptics) but then come to a different conclusion at the end – that science (essentially) does not work.
  • this third group – the science deniers – started out in the naive group, and then were so scandalized by the realization that science is a messy human endeavor that they leapt right to the nihilistic conclusion that science must therefore be bunk.
  • The article by Lehrer falls generally into this third category. He is discussing what has been called “the decline effect” – the fact that effect sizes in scientific studies tend to decrease over time, sometimes to nothing.
  • This term was first applied to the parapsychological literature, and was in fact proposed as a real phenomenon of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.
  • Lehrer, however, applies this idea to all of science, not just parapsychology. He writes: And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
  • Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
  • Lehrer is describing the cutting edge or the fringe of science, and then acting as if it applies all the way down to the core. I think the problem is that there is so much scientific knowledge that we take for granted – so much so that we forget it is knowledge derived from the scientific method, and at one point was not known.
  • It is telling that Lehrer uses as his primary examples of the decline effect studies from medicine, psychology, and ecology – areas where the signal to noise ratio is lowest in the sciences, because of the highly variable and complex human element. We don’t see as much of a decline effect in physics, for example, where phenomena are more objective and concrete.
  • If the truth itself does not “wear off”, as the headline of Lehrer’s article provocatively states, then what is responsible for this decline effect?
  • it is no surprise that effect sizes in preliminary studies tend to be positive. This can be explained on the basis of experimenter bias – scientists want to find positive results, and initial experiments are often flawed or less than rigorous. It takes time to figure out how to rigorously study a question, and so early studies will tend not to control for all the necessary variables. There is further publication bias, in which positive studies tend to be published more than negative studies.
  • Further, some preliminary research may be based upon chance observations – a false pattern based upon a quirky cluster of events. If these initial observations are used in the preliminary studies, then the statistical fluke will be carried forward. Later studies are then likely to exhibit a regression to the mean, or a return to more statistically likely results (which is exactly why you shouldn’t use initial data when replicating a result, but should use entirely fresh data – a mistake for which astrologers are infamous).
  • skeptics are frequently cautioning against new or preliminary scientific research. Don’t get excited by every new study touted in the lay press, or even by a university’s press release. Most new findings turn out to be wrong. In science, replication is king. Consensus and reliable conclusions are built upon multiple independent lines of evidence, replicated over time, all converging on one conclusion.
  • Lehrer does make some good points in his article, but they are points that skeptics are fond of making. In order to have a mature and functional appreciation for the process and findings of science, it is necessary to understand how science works in the real world, as practiced by flawed scientists and scientific institutions. This is the skeptical message.
  • But at the same time reliable findings in science are possible, and happen frequently – when results can be replicated and when they fit into the expanding intricate weave of the picture of the natural world being generated by scientific investigation.
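The mechanism sketched in these excerpts - publication bias selecting flukes, then regression to the mean on replication - is easy to simulate. A minimal sketch, assuming a true effect of exactly zero and a literature that only notices the most striking of 100 initial studies (all numbers are illustrative):

```python
import random

random.seed(1)  # any seed shows the same qualitative pattern

def study(true_effect=0.0, noise=1.0, n=20):
    """One study: the average of n noisy measurements of the effect."""
    return sum(random.gauss(true_effect, noise) for _ in range(n)) / n

# 100 labs run an initial study of an effect whose true size is ZERO.
# Publication bias: only the most striking result gets noticed.
published = max(study() for _ in range(100))

# Replications use entirely fresh data, so they regress toward the true
# value - the published effect "declines" though nothing real changed.
replications = [study() for _ in range(5)]

print(f"published initial effect: {published:.3f}")          # noticeably > 0
print("replications:", [round(r, 3) for r in replications])  # scattered near 0
```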
Weiye Loh

Turning Privacy "Threats" Into Opportunities - Esther Dyson - Project Syndicate - 0 views

  • Most disclosure statements are not designed to be read; they are designed to be clicked on. But some companies actually want their customers to read and understand the statements. They don’t want customers who might sue, and, just in case, they want to be able to prove that the customers did understand the risks. So the leaders in disclosure statements right now tend to be financial and health-care companies – and also space-travel and extreme-sports vendors. They sincerely want to let their customers know what they are getting into, because a regretful customer is a vengeful one. That means making disclosure statements readable. I would suggest turning them into a quiz. The user would not simply click a single button, but would have to select the right button for each question. For example: What are my chances of dying in space? A) 5% B) 30% C) 1-4% (the correct answer, based on experience so far; current spacecraft are believed to be safer.) Now imagine: Who can see my data? A) I can. B) XYZ Corporation. C) XYZ Corporation’s marketing partners. (Click here to see the list.) D) XYZ Corporation’s affiliates and anyone it chooses. As the customer picks answers, she gets a good idea of what is going on. In fact, if you're a marketer, why not dispense with a single right answer and let the consumer specify what she wants to have happen with her data (and corresponding privileges/access rights if necessary)? That’s much more useful than vague policy statements. Suddenly, the disclosure statement becomes a consumer application that adds value to the vendor-consumer relationship.
  • And show the data themselves rather than a description.
  • this is all very easy if you are the site with which the user communicates directly; it is more difficult if you are in the background, a third party collecting information surreptitiously. But that practice should be stopped, anyway.
  • just as they have with Facebook, users will become more familiar with the idea of setting their own privacy preferences and managing their own data. Smart vendors will learn from Facebook; the rest will lose out to competitors. Visualizing the user's information and providing an intelligible interface is an opportunity for competitive advantage.
  • I see this happening already with a number of companies, including some with which I am involved. For example, in its research surveys, 23andMe asks people questions such as how often they have headaches or whether they have ever been exposed to pesticides, and lets them see (in percentages) how other 23andMe users answer the question. This kind of information is fascinating to most people. TripIt lets you compare and match your own travel plans with those of friends. Earndit lets you compete with others to exercise more and win points and prizes.
  • Consumers increasingly expect to be able to see themselves both as individuals and in context. They will feel more comfortable about sharing data if they feel confident that they know what is shared and what is not. The online world will feel like a well-lighted place with shops, newsstands, and the like, where you can see other people and they can see you. Right now, it more often feels like lurking in a spooky alley with a surveillance camera overlooking the scene.
  • Of course, there will be “useful” data that an individual might not want to share – say, how much alcohol they buy, which diseases they have, or certain of their online searches. They will know how to keep such information discreet, just as they might close the curtains to get undressed in their hotel room after enjoying the view from the balcony. Yes, living online takes a little more thought than living offline. But it is not quite as complex once Internet-based services provide the right tools – and once awareness and control of one’s own data become a habit.
  •  
    companies see consumer data as something that they can use to target ads or offers, or perhaps that they can sell to third parties, but not as something that consumers themselves might want. Of course, this is not an entirely new idea, but most pundits on both sides - privacy advocates and marketers - don't realize that rather than protecting consumers or hiding from them, companies should be bringing them into the game. I believe that successful companies will turn personal data into an asset by giving it back to their customers in an enhanced form. I am not sure exactly how this will happen, but current players will either join this revolution or lose out.
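Dyson’s quiz idea is concrete enough to sketch. Below is a minimal, hypothetical implementation of the “Who can see my data?” question; the XYZ Corporation options come from the article itself, but the “correct” answer and the flow are invented placeholders for whatever a vendor’s actual policy says:

```python
# Hypothetical quiz-style disclosure statement, per Dyson's suggestion.
QUESTION = "Who can see my data?"
OPTIONS = {
    "A": "I can.",
    "B": "XYZ Corporation.",
    "C": "XYZ Corporation's marketing partners.",
    "D": "XYZ Corporation's affiliates and anyone it chooses.",
}
CORRECT = "C"  # placeholder: what the (hypothetical) policy actually says

def run_quiz() -> bool:
    """Show the question, grade the answer, and state the actual policy."""
    print(QUESTION)
    for key, text in OPTIONS.items():
        print(f"  {key}) {text}")
    answer = input("Your answer: ").strip().upper()
    if answer == CORRECT:
        print("Correct - and now you actually know what you agreed to.")
    else:
        print(f"Not quite. The policy says: {CORRECT}) {OPTIONS[CORRECT]}")
    return answer == CORRECT

if __name__ == "__main__":
    run_quiz()
```

Her further suggestion - letting the consumer’s answer set the policy rather than merely test it - would turn CORRECT into a per-user preference written back to the vendor.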
Jiamin Lin

Firms allowed to share private data - 0 views

  •  
    Companies that request customers’ private information may in turn distribute these confidential particulars to others. As such, cases of fraud and identity theft have surfaced, with fraudsters using these distributed identities to apply for loans or credit cards. Unlike in other countries, no privacy law to safeguard an individual’s data against unauthorized commercial use has been put in place in Singapore. As a result, fraudsters are able to exploit this loophole. Ethical question: Is it right for companies to request their customers’ private information for certain reasons? Is it even fair that they distribute this information to third parties, perhaps as a way to make money? Problem: I think the main problem is that there isn’t a law in Singapore that safeguards an individual’s data against unauthorized commercial use. Even though the Model Data Protection Code scheme tries to do the above, it is, after all, still a voluntary scheme. Companies can opt to adopt the scheme, but whether they choose to apply it regularly is another issue. As long as a privacy law is not in place, this issue will continue to recur in Singapore.
Weiye Loh

Balderdash - 0 views

  • A letter Paul wrote to complain about the “The Dead Sea Scrolls” exhibition at the Arts House:

    To Ms. Amira Osman (Marketing and Communications Manager)
    cc. Colin Goh, General Manager; Florence Lee, Deputy General Manager

    Dear Ms. Osman,

    I visited the Dead Sea Scrolls “exhibition” today with my wife. Thinking that it was from a legitimate scholarly institute or (how naïve of me!) the Israel Antiquities Authority, I was looking forward to a day of education and entertainment. Yet when I got in, much of the exhibition (and booklets) merely espouses an evangelical (fundamentalist) view of the Bible – there are booklets on the inerrancy of the Bible, on how archaeology has proven the Bible to be true, etc. Apart from these there are many blatant misrepresentations of the state of archaeology and mainstream biblical scholarship:

    a) There was an initial screening upon entry of a 5-10 minute pseudo-documentary on the Dead Sea Scrolls. A presenter (can’t remember the name) was described as a “biblical archaeologist” – a term that no serious archaeologist working in the Levant would apply to him or herself. (Some prefer the term “Syro-Palestinian archaeologist”, but almost all reject the term “biblical archaeologist”.) See the book by Thomas W. Davis, “Shifting Sands: The Rise and Fall of Biblical Archaeology”, Oxford, New York 2004. Davis is an actual archaeologist working in the field, and the book tells why the term “biblical archaeologist” is not considered legitimate by serious archaeologists.

    b) In the same presentation, the presenter made the erroneous statement that the entire Old Testament was translated into Greek in the third century BCE. This is a mistake – only the Pentateuch (the first five books of the Old Testament) was translated during that time. Note that this ‘error’ is not inadvertent but is a familiar claim by evangelical apologists who try to argue for an early date of all the books of the Old Testament: if all the books had been translated by the third century BCE, obviously they must all have been written before then! This flies against modern scholarship, which shows that some books of the Old Testament, such as the Book of Daniel, were written only in the second century BCE. The actual state of scholarship on the Septuagint [the Greek translation of the Bible] is accurately given in the book by Ernst Würthwein, “The Text of the Old Testament”, Eerdmans 1988, pp. 52-54.

    c) Perhaps the most blatant error was one which claimed that the “Magdalene fragments” – which contain the 26th chapter of the Gospel of Matthew – are dated to 50 AD!!! Scholars are unanimous in dating these fragments to 200 AD. The only ‘scholar’ cited who dated these fragments to 50 AD was the German papyrologist Carsten Thiede – a well-known fundamentalist. This is what Burton Mack (a critical – legitimate – NT scholar) has to say about Thiede’s eccentric dating: “From a critical scholar's point of view, Thiede's proposal is an example of just how desperate the Christian imagination can become in the quest to argue for the literal facticity of the Christian gospels” [Mack, Burton L., “Who Wrote the New Testament?: The Making of the Christian Myth”, HarperCollins, San Francisco 1995]. Yet the dating of 50 AD is presented as though it were a scholarly consensus position!

    In fact the last point was so blatant that I confronted the exhibitors. (Tak Boleh Tahan!!) One American exhibitor told me that “Yes, it could have been worded differently, but then we would have to change the whole display” (!!). When I told him that this was not a typo but a blatant attempt to deceive, he mentioned that Thiede’s views are supported by “The Dallas Theological Seminary” – another well-known evangelical institute! I have no issue with the religious strengthening their faith by having their own internal exhibitions on historical artifacts etc. But when it is presented to the public as a scholarly exhibition – this is quite close to being dishonest. I felt cheated of the $36 I paid for the tickets and of the hour that I spent there before realizing what type of exhibition it was. I am disappointed with the Arts House for showcasing this without warning potential visitors of its clear religious bias.

    Yours sincerely,
    Paul Tobin

    To their credit, the Arts House speedily replied.
    • Weiye Loh
       
      The issue of truth is indeed maddening. Certainly, the ‘production’ of truth has been widely researched and debated by scholars. Spivak, for example, argued for deconstruction by questioning the privilege of identity by which someone is believed to have the truth. Along the same line, albeit somewhat misunderstood I feel, it was mentioned in class that people who are oppressed somehow know better.
qiyi liao

Amazon targeted in class action over vanishing e-books - 0 views

  •  
    Issue in contention: Amazon deleted legally purchased e-books from Kindle users without prior notice, after learning that these e-books were pirated versions. This ability of Amazon's to "remotely delete digital content purchased through the Kindle store" was never disclosed to its paying customers. In fact, its license terms seem to offer Kindle users permanent access to the files they purchase (see #). Sure, Amazon admits mishandling the issue and promises never to remove content in such circumstances again. However, ultimately, they still own that power to remove, edit content etc. What effects would that have on our society then? Consider Orwell's notion of Big Brother in "1984" (Creepily, one of the books that was removed in this mini-scandal). Also, who is/should Amazon be more accountable to? Its customers? Shareholders? Third-party publishers? (At the end of the day, it's still a profit-seeking corporation.) NB. Kindle is a platform developed by Amazon for reading e-books and other digital media. #Upon your payment of the applicable fees set by Amazon, Amazon grants you the non-exclusive right to keep a permanent copy of the applicable Digital Content and to view, use, and display such Digital Content an unlimited number of times, solely on the Device or as authorized by Amazon as part of the Service and solely for your personal, non-commercial use.
Weiye Loh

How wise are crowds? - 0 views

  • In the past, economists trying to model the propagation of information through a population would allow any given member of the population to observe the decisions of all the other members, or of a random sampling of them. That made the models easier to deal with mathematically, but it also made them less representative of the real world.
    • Weiye Loh
       
      Random sampling is not representative
  • “What this paper does is add the important component that this process is typically happening in a social network where you can’t observe what everyone has done, nor can you randomly sample the population to find out what a random sample has done, but rather you see what your particular friends in the network have done,” says Jon Kleinberg, Tisch University Professor in the Cornell University Department of Computer Science, who was not involved in the research. “That introduces a much more complex structure to the problem, but arguably one that’s representative of what typically happens in real settings.”
    • Weiye Loh
       
      So random sampling is actually more accurate?
  • Earlier models, Kleinberg explains, indicated the danger of what economists call information cascades. “If you have a few crucial ingredients — namely, that people are making decisions in order, that they can observe the past actions of other people but they can’t know what those people actually knew — then you have the potential for information cascades to occur, in which large groups of people abandon whatever private information they have and actually, for perfectly rational reasons, follow the crowd,”
  • The MIT researchers’ paper, however, suggests that the danger of information cascades may not be as dire as it previously seemed.
  • a mathematical model that describes attempts by members of a social network to make binary decisions — such as which of two brands of cell phone to buy — on the basis of decisions made by their neighbors. The model assumes that for all members of the population, there is a single right decision: one of the cell phones is intrinsically better than the other. But some members of the network have bad information about which is which.
  • The MIT researchers analyzed the propagation of information under two different conditions. In one case, there’s a cap on how much any one person can know about the state of the world: even if one cell phone is intrinsically better than the other, no one can determine that with 100 percent certainty. In the other case, there’s no such cap. There’s debate among economists and information theorists about which of these two conditions better reflects reality, and Kleinberg suggests that the answer may vary depending on the type of information propagating through the network. But previous models had suggested that, if there is a cap, information cascades are almost inevitable.
  • if there’s no cap on certainty, an expanding social network will eventually converge on an accurate representation of the state of the world; that wasn’t a big surprise. But they also showed that in many common types of networks, even if there is a cap on certainty, convergence will still occur.
  • “People in the past have looked at it using more myopic models,” says Acemoglu. “They would be averaging type of models: so my opinion is an average of the opinions of my neighbors’.” In such a model, Acemoglu says, the views of people who are “oversampled” — who are connected with a large enough number of other people — will end up distorting the conclusions of the group as a whole.
  • “What we’re doing is looking at it in a much more game-theoretic manner, where individuals are realizing where the information comes from. So there will be some correction factor,” Acemoglu says. “If I’m seeing you, your action, and I’m seeing Munzer’s action, and I also know that there is some probability that you might have observed Munzer, then I discount his opinion appropriately, because I know that I don’t want to overweight it. And that’s the reason why, even though you have these influential agents — it might be that Munzer is everywhere, and everybody observes him — that still doesn’t create a herd on his opinion.”
  • the new paper leaves a few salient questions unanswered, such as how quickly the network will converge on the correct answer, and what happens when the model of agents’ knowledge becomes more complex.
  • In follow-up papers, the MIT researchers begin to address both questions. One paper examines rate of convergence, although Dahleh and Acemoglu note that its results are “somewhat weaker” than those about the conditions for convergence. Another paper examines cases in which different agents make different decisions given the same information: some people might prefer one type of cell phone, others another. In such cases, “if you know the percentage of people that are of one type, it’s enough — at least in certain networks — to guarantee learning,” Dahleh says. “I don’t need to know, for every individual, whether they’re for it or against it; I just need to know that one-third of the people are for it, and two-thirds are against it.” For instance, he says, if you notice that a Chinese restaurant in your neighborhood is always half-empty, and a nearby Indian restaurant is always crowded, then information about what percentages of people prefer Chinese or Indian food will tell you which restaurant, if either, is of above-average or below-average quality.
  •  
    By melding economics and engineering, researchers show that as social networks get larger, they usually get better at sorting fact from fiction.
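To make the cascade mechanism concrete, here is a minimal Python sketch of sequential social learning. It is an illustrative toy under assumed parameters (signal accuracy, number of observed friends, a simple majority-vote rule), not the game-theoretic Bayesian model analyzed in the MIT paper:

```python
import random

# Toy model of sequential social learning (illustrative assumptions only).
# The true state is 1. Each agent receives a noisy private signal, observes
# the choices of a few earlier "friends", and follows the observed majority,
# falling back on the private signal in case of a tie.
def simulate(n_agents=200, signal_accuracy=0.7, n_friends=5, seed=1):
    random.seed(seed)
    true_state = 1
    decisions = []
    for _ in range(n_agents):
        signal = true_state if random.random() < signal_accuracy else 1 - true_state
        friends = random.sample(decisions, min(n_friends, len(decisions)))
        votes_for_1 = sum(friends)
        if 2 * votes_for_1 > len(friends):
            choice = 1            # observed majority chose 1
        elif 2 * votes_for_1 < len(friends):
            choice = 0            # observed majority chose 0
        else:
            choice = signal       # tie (or no friends yet): trust own signal
        decisions.append(choice)
    return sum(decisions) / n_agents  # fraction that chose correctly

print(simulate())                      # strong signals: usually converges on the truth
print(simulate(signal_accuracy=0.55))  # weak signals: early errors can cascade
```

Running it with weaker private signals shows how a few early wrong choices can propagate through everyone who observes them, which is the information-cascade danger the excerpts above describe.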
Weiye Loh

Basqueresearch.com: News - PhD thesis warns of risk of delegating to just a few teacher... - 0 views

  • The incorporation of Information and Communication Technologies (ICT) into Primary Education brought with it positive changes in the roles of teacher and student. Teachers stopped being mere transmitters and students mere receptors: the former became mediators of information, while the latter opted for learning by investigating, discovering and presenting ideas to classmates and teachers, which also gives them the opportunity to get to know the work of other students. Thus, the use of the Internet and ICT reinforces participation and collaboration in the school. According to Dr Altuna, it also helps to foster learning models that are more constructivist, socio-constructivist and even connectivist.
  • Despite its educational possibilities the researcher warns that there are numerous factors that limit the incorporation of Internet into the teaching of the curricular subject in question. These involve aspects such as the time dedicated weekly, technological and computer facilities, accessibility and connection to Internet, the school curriculum and, above all, the knowledge, training and involvement of the teaching staff.
  • The thesis observed a tendency to delegate responsibility for ICT in the school to those teachers considered to be “computer experts”. Dr Altuna warns of the risks of this practice: the rest of the staff remain untrained and unable to apply ICT and the Internet in activities within their own curricular subjects. He stresses, therefore, that everyone should share responsibility for the educational measures taken so that students acquire digital skills. The thesis also observed the need for a pedagogical approach to ICT that advises teaching staff on understanding and putting into practice educationally innovative activities.
  • ...2 more annotations...
  • Dr Altuna counts among the limitations on incorporating ICT not only the lack of involvement of teaching staff, but also that of families. Families showed interest in the Internet and ICT as educational tools for their children, but they, too, delegate excessively to the schools. The researcher stressed that families also need guidance: they are concerned about their children's use of the Internet but do not know the best way to address the problem.
  • Educational psychologist Dr Jon Altuna has carried out a thorough study of the school 2.0 phenomenon. Specifically, he has looked into the use and level of incorporation of the Internet and of Information and Communication Technologies (ICT) in the third cycle of Primary Education, observing at the same time the attitudes of the teaching staff, the students and the children's families in this regard. His PhD, defended at the University of the Basque Country (UPV/EHU), is entitled Incorporation of Internet into the teaching of the subject Knowledge of the Environment during the third cycle of Primary Education: possibilities and analysis of the situation of a school. Dr Altuna's research is based on a case study undertaken over eight years at a school where new ICT activities had been introduced into the curricular subject Knowledge of the Environment, taught in the fifth and sixth years of Primary Education. The researcher gathered data from 837 students, 134 teachers and 190 families at this school, and completed the study with the experiences of ICT teachers from 21 schools.
  •  
    Despite its educational possibilities the researcher warns that there are numerous factors that limit the incorporation of Internet into the teaching of the curricular subject in question. These involve aspects such as the time dedicated weekly, technological and computer facilities, accessibility and connection to Internet, the school curriculum and, above all, the knowledge, training and involvement of the teaching staff.
Weiye Loh

Science, Strong Inference -- Proper Scientific Method - 0 views

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: (1) devising alternative hypotheses; (2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; (3) carrying out the experiment so as to get a clean result; (4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • ...28 more annotations...
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately, "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted, "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • The emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments, as some suppose; he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable, because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts and proofs and their associated pitfalls.
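The exclusion loop that Platt describes can be caricatured in a few lines of code. The sketch below is a toy Bayesian version with invented likelihood numbers (nothing like it appears in the essay): a crucial experiment assigns different probabilities to each possible outcome under each rival hypothesis, so observed outcomes drive the losing hypotheses toward exclusion.

```python
# Toy illustration of "multiple working hypotheses" plus exclusion.
# All numbers are invented for the sake of the example.

def update(posteriors, likelihoods, outcome):
    """One crucial experiment: weight each hypothesis by the probability
    it assigned to the observed outcome, then renormalize."""
    unnorm = {h: p * likelihoods[h][outcome] for h, p in posteriors.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Three rival hypotheses, equally credible at the start.
posteriors = {"H1": 1 / 3, "H2": 1 / 3, "H3": 1 / 3}

# A well-designed crucial experiment: every possible outcome (A, B, C)
# is likely under one hypothesis and unlikely under the others.
likelihoods = {
    "H1": {"A": 0.8, "B": 0.1, "C": 0.1},
    "H2": {"A": 0.1, "B": 0.8, "C": 0.1},
    "H3": {"A": 0.1, "B": 0.1, "C": 0.8},
}

for observed in ["A", "A", "B", "A"]:   # a run of experimental outcomes
    posteriors = update(posteriors, likelihoods, observed)
    # "Exclusion": drop hypotheses that have become untenable, renormalize.
    posteriors = {h: p for h, p in posteriors.items() if p > 0.01}
    total = sum(posteriors.values())
    posteriors = {h: p / total for h, p in posteriors.items()}
    print(observed, {h: round(p, 3) for h, p in posteriors.items()})
```

The design point is that an experiment whose outcomes are equally likely under every hypothesis excludes nothing; that is Platt's complaint about theories that predict everything and therefore predict nothing.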
Weiye Loh

Do Americans trust the news media? (OneNewsNow.com) - 1 views

  • A newly released poll by Sacred Heart University: the SHU Polling Institute recently conducted its third survey on "Trust and Satisfaction with the National News Media." It's a national poll intended to answer the question of whether Americans trust the news media. In a nutshell, the answer is a resounding "No!"
  • Pollsters asked which television news organizations people turned to most frequently.  CBS News didn't even make the top five!  Who did?  Fox News was first by a wide margin of 28.4 percent.  CNN was second, chosen by 14.9 percent.  NBC News, ABC News, and "local news" followed, while CBS News lagged way behind with only 7.4 percent.
  • On the question of media bias, a whopping 83.6 percent agree that national news media organizations are "very or somewhat biased." 
  • ...3 more annotations...
  • Which media outlet is most trusted to be accurate today?  Again, Fox News took first place with a healthy margin of 30 percent.  CNN followed with 19.5 percent, NBC News with 7.5 percent, and ABC News with 7.5 percent.
  • "we see a strong degree of polarization and political partisanship in the country in general, we see a similar trend in the media." That probably explains why Fox News is also considered the least trusted, according to the SHU poll. Viewers seem to either love or hate Fox News.
    • Weiye Loh
       
      So is Fox News the most trusted or the least trusted according to the SHU poll? Or both? And if it's both, how exactly is the survey carried out? Aren't survey options supposed to be mutually exclusive and exhaustive? Maybe SHU has no course on research methods.
  • only 24.3 percent of the SHU respondents say they believe "all or most news media reporting."  They also overwhelmingly (86.6 percent) believe "that the news media have their own political and public policy positions and attempt to influence public opinion." 
    • Weiye Loh
       
      They believe the media attempt to influence them, and they also believe the media are biased. Logically, then, they neither trust nor believe the media. Does that mean the media have no influence? If so, why are they worried? Third-person perception? Or do they simply believe that they are holier-than-thou? Are they really more objective? What is objectivity anyway, if not a social construct?
  •  
    One biased news source reporting on the bias of other news sources. Shows that (self-)reflexivity is key in reading.
Satveer

Why I hate stem-cell technologies & Regenerative Therapies - 5 views

http://news.bbc.co.uk/2/hi/health/8314442.stm This article is another one of those regenerative-therapy articles that use stem-cell technology to reverse aging because first world countries ar...

stem cell regenerative first world third

started by Satveer on 21 Oct 09 no follow-up yet
Weiye Loh

Sony's Stringer 'sorry' over data breach - 0 views

  • Sony has worked to strengthen its information security systems, 'placing our highest priority on ensuring the security of our customers' personal information, and regaining their trust.' The Japanese electronics and entertainment giant has faced a series of cyber attacks and said more than 100 million accounts have been affected, making it one of the largest data breaches in the history of the Internet. Analysts say costs associated with the breach could be as much as US$1 billion (S$1.24 billion), but deeper damage to Sony's brand image could undermine efforts to link its gadgets to an online network of games, movies and music.
  •  
    SONY chairman and president Howard Stringer on Tuesday apologised to shareholders and customers over a massive data leak, which helped push its share price to a two-year low this month. 'In April, we faced a serious challenge in the form of a cyber attack launched against the PlayStation Network, Qriocity and the network systems of Sony Online Entertainment,' Mr Stringer said at a meeting in Tokyo attended by about 5,900 shareholders. 'We are sorry for any concern and inconvenience that the incidents may have caused our shareholders, customers and stakeholders,' he said. The company is expecting its third-straight annual loss this year.
Weiye Loh

Skepticblog » Why are textbooks so expensive? - 0 views

  • As an author, I've seen how the sales histories of textbooks work. Typically they have a big spike of sales for the first 1-2 years after they are introduced, and that's when most of the new copies are sold and most of the publisher's money is made. But by year 3 (and sometimes sooner), the sales plunge and within another year or two, the sales are minuscule. The publishers have only a few options in a situation like this. One option: they can price the book so that the first two years' worth of sales will pay their costs back before the used copies wipe out their market, which is the major reason new copies cost so much. Another option (especially with high-volume introductory textbooks) is to revise it within 2-3 years after the previous edition, so the new edition will drive all the used copies off the shelves for another two years or so. This is also a common strategy. For my most popular books, the publisher expected me to be working on a new edition almost as soon as the previous edition came out, and 2-3 years later, the new edition (with a distinctive new cover, and sometimes with significant new content as well) starts the sales curve cycle all over again. One of my books is in its eighth edition, but there are introductory textbooks that are in the 15th or 20th edition.
  • For over 20 years now, I've heard all sorts of prophets saying that paper textbooks are dead, and predicting that all textbooks would be electronic within a few years. Year after year, I hear this prediction—and paper textbooks continue to sell just fine, thank you. Certainly, electronic editions of mass market best-sellers, novels and mysteries (usually cheaply produced with few illustrations) seem to do fine as Kindle editions or eBooks, and that market is well established. But electronic textbooks have never taken off, at least in science textbooks, despite numerous attempts to make them work. Watching students study, I have a few thoughts as to why this is: Students seem to feel that they haven't "studied" unless they've covered their textbook with yellow highlighter markings. Although there are electronic equivalents of the highlighter marker pen, most of today's students seem to prefer physically marking on a real paper book. Textbooks (especially science books) are heavy with color photographs and other images that don't often look good on a tiny screen, don't print out well on ordinary paper, but raise the price of the book. Even an eBook is going to be a lot more expensive with lots of images compared to a mass-market book with no art whatsoever. I've watched my students study, and they like the flexibility of being able to use their book just about anywhere—in bright light outdoors away from a power supply especially. Although eBooks are getting better, most still have screens that are hard to read in bright light, and eventually their battery will run out, whether you're near a power supply or not. Finally, if you drop your eBook or get it wet, you have a disaster. A textbook won't even be dented by hard usage, and unless it's totally soaked and cannot be dried, it does a lot better when wet than any electronic book.
  • A recent study found that digital textbooks were no panacea after all. Only one-third of the students said they were comfortable reading e-textbooks, and three-fourths preferred a paper textbook to an e-textbook if the costs were equal. And the costs have hidden jokers in the deck: e-textbooks may seem cheaper, but they tend to have built-in expiration dates and cannot be resold, so they may be priced below paper textbooks but end up costing about the same. E-textbooks are not that much cheaper for publishers, either, since the writing, editing, art manuscript, promotion, etc., all cost the publisher the same whether the final book is in paper or electronic. The only cost difference is printing and binding and shipping and storage vs. creating the electronic version.
  •  
    But in the 1980s and 1990s, the market changed drastically with the expansion of used book recyclers. They set up shop at the bookstore door near the end of the semester and bought students' new copies for pennies on the dollar. They would show up in my office uninvited and ask if I wanted to sell any of the free adopter's copies that I get from publishers trying to entice me. If you walk through any campus bookstore, nearly all the new copies have been replaced by used copies, usually very tattered and with broken spines. The students naturally gravitate to the cheaper used books (and some prefer them because they like it if a previous owner has highlighted the important stuff). In many bookstores, there are no new copies at all, or just a few that go unsold. What these bargain hunters don't realize is that every used copy purchased means a new copy unsold. Used copies pay nothing to the publisher (or the author, either), so to recoup their costs, publishers must price their new copies to offset the loss of sales to used copies. And so the vicious circle begins: the publisher raises the price on the book again, more students buy used copies, and new copies keep climbing in price.
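As a rough illustration of the pricing logic the author describes (all numbers invented), the publisher must recover its fixed costs from the new copies sold before used copies take over the market:

```python
# Back-of-envelope sketch of textbook break-even pricing.
# Every figure below is an invented, illustrative assumption.

fixed_costs = 500_000         # editing, art, typesetting, promotion
unit_cost = 12.0              # printing, binding, shipping per copy
expected_new_copies = 25_000  # copies sold before used books flood the market
margin = 1.25                 # publisher's target markup over break-even

break_even_price = fixed_costs / expected_new_copies + unit_cost
print(f"break-even price: ${break_even_price:.2f}")           # $32.00
print(f"list price:       ${break_even_price * margin:.2f}")  # $40.00
```

Shrink expected_new_copies (more used-copy sales) and the break-even price climbs, which is the vicious circle described above.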