
Home/ New Media Ethics 2009 course/ Group items tagged Excel


Weiye Loh

Roger Pielke Jr.'s Blog: Climate Science Turf Wars and Carbon Dioxide Myopia - 0 views

  • Presumably by "climate effect" Caldeira means the long-term consequences of human actions on the global climate system -- that is, climate change. Going unmentioned by Caldeira is the fact that there are also short-term climate effects, among them the direct effects of non-carbon dioxide emissions on human health and agriculture.
  • There are a host of reasons to worry about the climatic effects of non-CO2 forcings beyond long-term climate change. Shindell explains this point: There is also a value judgement inherent in any suggestion that CO2 is the only real forcer that matters or that steps to reduce soot and ozone are ‘almost meaningless’. Based on CO2’s long residence time in the atmosphere, it dominates long-term committed forcing. However, climate changes are already happening and those alive today are feeling the effects now and will continue to feel them during the next few decades, but they will not be around in the 22nd century. These climate changes have significant impacts. When rainfall patterns shift, livelihoods in developing countries can be especially hard hit. I suspect that virtually all farmers in Africa and Asia are more concerned with climate change over the next 40 years than with those after 2050. Of course they worry about the future of their children and their children’s children, but providing for their families now is a higher priority... However, saying CO2 is the only thing that matters implies that the near-term climate impacts I’ve just outlined have no value at all, which I don’t agree with. What’s really meant in a comment like ‘if one’s goal is to limit climate change, one would always be better off spending the money on immediate reduction of CO2 emissions’ is ‘if one’s goal is limiting LONG-TERM climate change’. That’s a worthwhile goal, but not the only goal.
  • The UNEP report notes that action on carbon dioxide is not going to have a discernible influence on the climate system until perhaps mid-century (see the figure at the top of this post).  Consequently, action on non-carbon dioxide forcings is very much independent of action on carbon dioxide -- they address climatic causes and consequences on very different timescales, and thus probably should not even be conflated to begin with. UNEP writes: In essence, the near-term CH4 and BC measures examined in this Assessment are effectively decoupled from the CO2 measures both in that they target different source sectors and in that their impacts on climate change take place over different timescales. Advocates for action on carbon dioxide are quick to frame discussions narrowly in terms of long-term climate change and the primary role of carbon dioxide. Indeed, accumulating carbon dioxide is a very important issue (consider that my focus in The Climate Fix is carbon dioxide, but I also emphasize that the carbon dioxide issue is not the same thing as climate change), but it is not the only issue.
  • ...2 more annotations...
  • perhaps the difference in opinions on this subject expressed by Shindell and Caldeira is nothing more than an academic turf battle over what it means for policy makers to focus on "climate" -- with one wanting the term (and justifications for action invoking that term) to be reserved for long-term climate issues centered on carbon dioxide and the other focused on a broader definition of climate and its impacts.  If so, then it is important to realize that such turf battles have practical consequences. Shindell's breath of fresh air gets the last word with his explanation of why we must consider long- and short-term climate impacts at the same time, and how we balance them will reflect a host of non-scientific considerations: So rather than set one against the other, I’d view this as analogous to research on childhood leukemia versus Alzheimer’s. If you’re an advocate for child’s health, you may care more about the former, and if you’re a retiree you might care more about the latter. One could argue about which is most worthy based on number of cases, years of life lost, etc., but in the end it’s clear that both diseases are worth combating and any ranking of one over the other is a value judgement. Similarly, there is no scientific basis on which to decide which impacts of climate change are most important, and we can only conclude that both controls are worthwhile. The UNEP/WMO Assessment provides clear information on the benefits of short-lived forcer reductions so that decision-makers, and society at large, can decide how best to use limited resources.
  • If we eliminated emissions of methane and black carbon, but did nothing about carbon dioxide, we would have delayed... This presupposes that CO2 emissions can be capped at current levels without economic devastation, or that immediate economic devastation is warranted.
  •  
    Over at Dot Earth Andy Revkin has posted up two illuminating comments from climate scientists -- one from NASA's Drew Shindell and a response to it from Stanford's Ken Caldeira. Shindell's comment focuses on the impacts of action to mitigate the effects of black carbon, tropospheric ozone and other non-carbon dioxide human climate forcings, and comes from his perspective as lead author of an excellent UNEP report on the subject that is just out (here in PDF and the Economist has an excellent article here).  (Shindell's comment was apparently in response to an earlier Dot Earth comment by Raymond Pierrehumbert.) In contrast, Caldeira invokes long-term climate change to defend the importance of focusing on carbon dioxide:
Weiye Loh

BioMed Central | Full text | Mistaken Identifiers: Gene name errors can be introduced i... - 0 views

  • Background: When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names. Results: A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered. Conclusions: Users of Excel for analyses involving gene names should be aware of this problem, which can cause genes, including medically important ones, to be lost from view and which has contaminated even carefully curated public databases. We provide work-arounds and scripts for circumventing the problem.
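The conversions described in the abstract can be illustrated in code. The helper and regex patterns below are a minimal sketch of my own, not the scripts the paper provides: they flag gene identifiers that Excel's default import would silently convert (month-like symbols become dates; Riken clone identifiers that look like scientific notation become floats).

```python
import re

# Illustrative patterns (assumed, not from the paper): gene symbols that
# resemble month abbreviations, e.g. SEPT2, MARCH1, DEC1, are read as dates;
# Riken clone IDs such as 2310009E13 parse as scientific notation.
DATE_LIKE = re.compile(
    r"^(SEPT|MARCH|APR|DEC|OCT|NOV|FEB|JAN|JUN|JUL|AUG|MAY|SEP)\d{1,2}$", re.I
)
SCI_NOTATION = re.compile(r"^\d+E\d+$", re.I)

def excel_risk(gene_id):
    """Return 'date' or 'float' if Excel would silently convert the ID, else None."""
    if DATE_LIKE.match(gene_id):
        return "date"    # e.g. SEPT2 is imported as the date 2-Sep
    if SCI_NOTATION.match(gene_id):
        return "float"   # e.g. 2310009E13 is read as 2310009 x 10^13
    return None

for gene in ["SEPT2", "MARCH1", "2310009E13", "TP53"]:
    print(gene, "->", excel_risk(gene))
```

A detector like this only warns; the practical work-around the paper alludes to is to force such columns to be treated as text before Excel ever parses them, since the conversions are irreversible once applied.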
Arthur Cane

Excellent SEO Service That Last - 1 views

I have been working with Syntactics Inc. for five years now, and I have entrusted my online business to them for that long because I found their services really excellent. In fact, for that five...

seo outsourcing services

started by Arthur Cane on 13 Dec 11 no follow-up yet
funeral adelaide

Excellent Funeral in Adelaide - 1 views

My entire family would like to thank Sensible Funerals for helping us out in preparing the funeral of my dearly departed grandmother. The funeral services that their professional funeral directors ...

Funeral directors Adelaide

started by funeral adelaide on 12 May 12 no follow-up yet
test and tagging

Excellent Test and Tagging in Adelaide - 1 views

I have been looking for a reliable electrical safety specialist to check on my electrical equipment which we have been using in my restaurant in Adelaide. After a week of searching, I finally found...

test and tagging

started by test and tagging on 24 Nov 11 no follow-up yet
Weiye Loh

Book Review: Future Babble by Dan Gardner « Critical Thinking « Skeptic North - 0 views

  • I predict that you will find this review informative. If you do, you will congratulate my foresight. If you don’t, you’ll forget I was wrong.
  • My playful intro summarizes the main thesis of Gardner’s excellent book, Future Babble: Why Expert Predictions Fail – and Why We Believe Them Anyway.
  • In Future Babble, the research area explored is the validity of expert predictions, and the primary researcher examined is Philip Tetlock. In the early 1980s, Tetlock set out to better understand the accuracy of predictions made by experts by conducting a methodologically sound large-scale experiment.
  • ...10 more annotations...
  • Gardner presents Tetlock’s experimental design in an excellent way, making it accessible to the lay person. Concisely, Tetlock examined 27,450 judgments in which 284 experts were presented with clear questions whose answers could later be shown to be true or false (e.g., “Will the official unemployment rate be higher, lower or the same a year from now?”). For each prediction, the expert must answer clearly and express their degree of certainty as a percentage (e.g., dead certain = 100%). The use of precise numbers expands the statistical options and removes the complications of vague or ambiguous language.
  • Tetlock found the surprising and disturbing truth “that experts’ predictions were no more accurate than random guesses.” (p. 26) An important caveat is that there was a wide range of capability, with some experts being completely out of touch, and others able to make successful predictions.
  • “What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same is true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.” (p. 26)
  • The experts who did poorly were not comfortable with complexity and uncertainty, and tended to reduce most problems to some core theoretical theme. It was as if they saw the world through one lens or had one big idea that everything else had to fit into. In contrast, the experts who did decently were self-critical, used multiple sources of information and were more comfortable with uncertainty and correcting their errors. Their thinking style almost results in a paradox: “The experts who were more accurate than others tended to be less confident they were right.” (p. 27)
  • Gardner then introduces the terms ‘Hedgehog’ and ‘Fox’ to refer to bad and good predictors respectively. Hedgehogs are the ones you see pushing the same idea, while Foxes are likely in the background questioning the ability of prediction itself while making cautious proposals. Foxes are more likely to be correct. Unfortunately, it is Hedgehogs that we see on the news.
  • one of Tetlock’s findings was that “the bigger the media profile of an expert, the less accurate his predictions.” (p.28)
  • Chapter 2 – The Unpredictable World An exploration into how many events in the world are simply unpredictable. Gardner discusses chaos theory and necessary and sufficient conditions for events to occur. He supports the idea of actually saying “I don’t know,” which many experts are reluctant to do.
  • Chapter 3 – In the Minds of Experts A more detailed examination of Hedgehogs and Foxes. Gardner discusses randomness and the illusion of control while using narratives to illustrate his points à la Gladwell. This chapter provides a lot of context and background information that should be very useful to those less initiated.
  • Chapter 6 – Everyone Loves a Hedgehog More about predictions and how the media picks up hedgehog stories and talking points without much investigation into their underlying source or concern for accuracy. It is a good demolition of the absurdity of so many news “discussion shows.” Gardner demonstrates how the media prefer a show where Hedgehogs square off against each other, and it is important that these commentators not be challenged lest they become exposed and, by association, implicate the flawed structure of the program/network. Gardner really singles out certain people, like Paul Ehrlich, and shows how they have been wrong many times and yet can still get an audience.
  • “An assertion that cannot be falsified by any conceivable evidence is nothing more than dogma. It can’t be debated. It can’t be proven or disproven. It’s just something people choose to believe or not for reasons that have nothing to do with fact and logic. And dogma is what predictions become when experts and their followers go to ridiculous lengths to dismiss clear evidence that they failed.”
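The forecast-scoring setup the review describes, experts attaching explicit probabilities to questions whose answers are later shown to be true or false, is commonly evaluated with the Brier score: the mean squared error between stated probabilities and 0/1 outcomes, where 0 is perfect and higher is worse. The book excerpt does not specify Tetlock's exact scoring rule, so the sketch below is illustrative only:

```python
# Illustrative sketch (assumed scoring rule, not quoted from the book):
# the Brier score for a set of probabilistic forecasts against binary outcomes.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A "dead certain" expert who is wrong scores worse than a coin flip,
# which is one way the data can show experts doing no better than chance.
confident_wrong = brier_score([1.0, 1.0], [0, 0])   # 1.0, the worst possible
coin_flip       = brier_score([0.5, 0.5], [0, 1])   # 0.25
well_calibrated = brier_score([0.8, 0.2], [1, 0])   # ~0.04
print(confident_wrong, coin_flip, well_calibrated)
```

This also makes the Hedgehog/Fox contrast concrete: a confident Hedgehog pays a heavy quadratic penalty for certainty when wrong, while a hedged Fox forecast near 50% can never score worse than 0.25 on any single question.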
Weiye Loh

Designing Minds: Uncovered Video Profiles of Prominent Designers | Brain Pickings - 0 views

  • “My favorite quote about what is art and what is design and what might be the difference comes from Donald Judd: ‘Design has to work, art doesn’t.’ And these things all have to work. They have a function outside my desire for self-expression.” ~ Stefan Sagmeister

  • “When designers are given the opportunity to have a bigger role, real change, real transformation actually happens.” ~ Yves Behar

  •  
    In 2008, a now-defunct podcast program by Adobe called Designing Minds - not to be confused with frogdesign's excellent design mind magazine - did a series of video profiles of prominent artists and designers, including Stefan Sagmeister (whose Things I have learned in my life so far isn't merely one of the best-produced, most beautiful design books of the past decade, it's also a poignant piece of modern existential philosophy), Yves Behar (of One Laptop Per Child fame), Marian Bantjes (whose I Wonder remains my favorite typographic treasure) and many more, offering a rare glimpse of these remarkable creators' life stories, worldviews and the precious peculiarities that make them who they are and create what they create.
Weiye Loh

Net-Neutrality: The First Amendment of the Internet | LSE Media Policy Project - 0 views

  • debates about the nature, the architecture and the governing principles of the internet are not merely technical or economic discussions.  Above all, these debates have deep political, social, and cultural implications and become a matter of public, national and global interest.
  • In many ways, net neutrality could be considered the first amendment of the internet; no pun intended here. However, just as with freedom of speech, the principle of net neutrality cannot be approached as absolute or as a fetish. Even in a democracy we cannot say everything, all the time, in all contexts. Limiting the core principle of freedom of speech in a democracy is only possible in very specific circumstances, such as harm or racism, or in view of the public interest. Along the same lines, compromising on the principle of net neutrality should happen only for very specific and clearly defined reasons that are transparent, that do not serve commercial private interests but rather public interests, or that are implemented to guarantee an excellent quality of service for all.
  • One of the only really convincing arguments of those challenging net neutrality is that due to the dramatic increases in streaming activity and data-exchange through peer-to-peer networks, the overall quality of service risks being compromised if we stick to data being treated on a first come first serve basis. We are being told that popular content will need to be stored closer to the consumer, which evidently comes at an extra cost.
  • ...5 more annotations...
  • Implicitly two separate debates are being collapsed here and I would argue that we need to separate both. The first one relates to the stability of the internet as an information and communication infrastructure because of the way we collectively use that infrastructure. The second debate is whether ISPs and telecommunication companies should be allowed to differentiate in their pricing between different levels of quality of access, both towards consumers and content providers.
  • Just as with freedom of speech, circumstances can be found in which the principle, while still cherished and upheld, can be adapted and constrained to some extent. To paraphrase Tim Wu (2008), the aspiration should still be ‘to treat all content, sites, and platforms equally’, but maybe some forms of content should be treated more equally than others in order to guarantee an excellent quality of service for all. However, the societal and political implications of this need to be thought through in detail, and as with freedom of speech itself, it will, I believe, require strict regulation and conditions.
  • In regards to the first debate on internet stability, a case can be made for allowing internet operators to differentiate between different types of data with different needs – if for any reason the quality of service of the internet as a whole cannot be guaranteed anymore. 
  • Concerning the second debate on differential pricing, it is fair to say that from a public interest and civil liberty perspective the consolidation and institutionalization of a commercially driven two-tiered internet is not acceptable and impossible to legitimate. The same goes for allowing operators to differentiate in the quality of provision of certain kinds of content over others. A core principle such as net neutrality should never be relinquished for the sake of private interests and profit-making strategies – on behalf of industry or for others. If we need to compromise on net neutrality it would always have to be partial, circumscribed and only to improve the quality of service for all, not just for the few who can afford it.
  • Separating these two debates exposes the crux of the current net-neutrality debate. In essence, we are being urged to give up on the principle of net-neutrality to guarantee a good quality of service. However, this argument is actually a pretext for the telecom industry to make content-providers pay for the facilitation of access to their audiences – the internet subscribers. And this again can be linked to another debate being waged amongst content providers: how do we make internet users pay for the content they access online? I won’t open that can of worms here, but I will make my point clear. Telecommunication industry efforts to make content providers pay for access to their audiences do not offer legitimate reasons to suspend the first amendment of the internet.
Weiye Loh

The Death of Postmodernism And Beyond | Philosophy Now - 0 views

  • Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture. Some of the texts (‘The Library of Babel’) were written even before their parents were born. Replace this cache with other postmodern stalwarts – Beloved, Flaubert’s Parrot, Waterland, The Crying of Lot 49, Pale Fire, Slaughterhouse 5, Lanark, Neuromancer, anything by B.S. Johnson – and the same applies. It’s all about as contemporary as The Smiths, as hip as shoulder pads, as happening as Betamax video recorders. These are texts which are just coming to grips with the existence of rock music and television; they mostly do not dream even of the possibility of the technology and communications media – mobile phones, email, the internet, computers in every house powerful enough to put a man on the moon – which today’s undergraduates take for granted.
  • somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.
  • Postmodernism, like modernism and romanticism before it, fetishised [ie placed supreme importance on] the author, even when the author chose to indict or pretended to abolish him or herself. But the culture we have now fetishises the recipient of the text to the degree that they become a partial or whole author of it. Optimists may see this as the democratisation of culture; pessimists will point to the excruciating banality and vacuity of the cultural products thereby generated (at least so far).
  • ...17 more annotations...
  • Pseudo-modernism also encompasses contemporary news programmes, whose content increasingly consists of emails or text messages sent in commenting on the news items. The terminology of ‘interactivity’ is equally inappropriate here, since there is no exchange: instead, the viewer or listener enters – writes a segment of the programme – then departs, returning to a passive role. Pseudo-modernism also includes computer games, which similarly place the individual in a context where they invent the cultural content, within pre-delineated limits. The content of each individual act of playing the game varies according to the particular player.
  • The pseudo-modern cultural phenomenon par excellence is the internet. Its central act is that of the individual clicking on his/her mouse to move through pages in a way which cannot be duplicated, inventing a pathway through cultural products which has never existed before and never will again. This is a far more intense engagement with the cultural process than anything literature can offer, and gives the undeniable sense (or illusion) of the individual controlling, managing, running, making up his/her involvement with the cultural product. Internet pages are not ‘authored’ in the sense that anyone knows who wrote them, or cares. The majority either require the individual to make them work, like Streetmap or Route Planner, or permit him/her to add to them, like Wikipedia, or through feedback on, for instance, media websites. In all cases, it is intrinsic to the internet that you can easily make up pages yourself (eg blogs).
  • Where once special effects were supposed to make the impossible appear credible, CGI frequently [inadvertently] works to make the possible look artificial, as in much of Lord of the Rings or Gladiator. Battles involving thousands of individuals have really happened; pseudo-modern cinema makes them look as if they have only ever happened in cyberspace.
  • Similarly, television in the pseudo-modern age favours not only reality TV (yet another unapt term), but also shopping channels, and quizzes in which the viewer calls to guess the answer to riddles in the hope of winning money.
  • The purely ‘spectacular’ function of television, as with all the arts, has become a marginal one: what is central now is the busy, active, forging work of the individual who would once have been called its recipient. In all of this, the ‘viewer’ feels powerful and is indeed necessary; the ‘author’ as traditionally understood is either relegated to the status of the one who sets the parameters within which others operate, or becomes simply irrelevant, unknown, sidelined; and the ‘text’ is characterised both by its hyper-ephemerality and by its instability. It is made up by the ‘viewer’, if not in its content then in its sequence – you wouldn’t read Middlemarch by going from page 118 to 316 to 401 to 501, but you might well, and justifiably, read Ceefax that way.
  • A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity.
  • If scholars give the date they referenced an internet page, it is because the pages disappear or get radically re-cast so quickly. Text messages and emails are extremely difficult to keep in their original form; printing out emails does convert them into something more stable, like a letter, but only by destroying their essential, electronic state.
  • The cultural products of pseudo-modernism are also exceptionally banal
  • Much text messaging and emailing is vapid in comparison with what people of all educational levels used to put into letters.
  • A triteness, a shallowness dominates all.
  • In music, the pseudo-modern superseding of the artist-dominated album as monolithic text by the downloading and mix-and-matching of individual tracks on to an iPod, selected by the listener, was certainly prefigured by the music fan’s creation of compilation tapes a generation ago. But a shift has occurred, in that what was a marginal pastime of the fan has become the dominant and definitive way of consuming music, rendering the idea of the album as a coherent work of art, a body of integrated meaning, obsolete.
  • To a degree, pseudo-modernism is no more than a technologically motivated shift to the cultural centre of something which has always existed (similarly, metafiction has always existed, but was never so fetishised as it was by postmodernism). Television has always used audience participation, just as theatre and other performing arts did before it; but as an option, not as a necessity: pseudo-modern TV programmes have participation built into them.
  • Whereas postmodernism called ‘reality’ into question, pseudo-modernism defines the real implicitly as myself, now, ‘interacting’ with its texts. Thus, pseudo-modernism suggests that whatever it does or makes is what is reality, and a pseudo-modern text may flourish the apparently real in an uncomplicated form: the docu-soap with its hand-held cameras (which, by displaying individuals aware of being regarded, give the viewer the illusion of participation); The Office and The Blair Witch Project, interactive pornography and reality TV; the essayistic cinema of Michael Moore or Morgan Spurlock.
  • whereas postmodernism favoured the ironic, the knowing and the playful, with their allusions to knowledge, history and ambivalence, pseudo-modernism’s typical intellectual states are ignorance, fanaticism and anxiety
  • pseudo-modernism lashes fantastically sophisticated technology to the pursuit of medieval barbarism – as in the uploading of videos of beheadings onto the internet, or the use of mobile phones to film torture in prisons. Beyond this, the destiny of everyone else is to suffer the anxiety of getting hit in the cross-fire. But this fatalistic anxiety extends far beyond geopolitics, into every aspect of contemporary life; from a general fear of social breakdown and identity loss, to a deep unease about diet and health; from anguish about the destructiveness of climate change, to the effects of a new personal ineptitude and helplessness, which yield TV programmes about how to clean your house, bring up your children or remain solvent.
  • Pseudo-modernism belongs to a world pervaded by the encounter between a religiously fanatical segment of the United States, a largely secular but definitionally hyper-religious Israel, and a fanatical sub-section of Muslims scattered across the planet: pseudo-modernism was not born on 11 September 2001, but postmodernism was interred in its rubble.
  • pseudo-modernist communicates constantly with the other side of the planet, yet needs to be told to eat vegetables to be healthy, a fact self-evident in the Bronze Age. He or she can direct the course of national television programmes, but does not know how to make him or herself something to eat – a characteristic fusion of the childish and the advanced, the powerful and the helpless. For varying reasons, these are people incapable of the “disbelief of Grand Narratives” which Lyotard argued typified postmodernists
  •  
    Postmodern philosophy emphasises the elusiveness of meaning and knowledge. This is often expressed in postmodern art as a concern with representation and an ironic self-awareness. And the argument that postmodernism is over has already been made philosophically. There are people who have essentially asserted that for a while we believed in postmodern ideas, but not any more, and from now on we're going to believe in critical realism. The weakness in this analysis is that it centres on the academy, on the practices and suppositions of philosophers who may or may not be shifting ground or about to shift - and many academics will simply decide that, finally, they prefer to stay with Foucault [arch postmodernist] than go over to anything else. However, a far more compelling case can be made that postmodernism is dead by looking outside the academy at current cultural production.
Arthur Cane

Outstanding Team of SEO Specialists - 1 views

We have already tried a number of link builders and SEO services over the years and we were generally disappointed. Until we found our way to Syntactics Inc. I find their service great that is why,...

seo specialist specialists

started by Arthur Cane on 26 Jan 12 no follow-up yet
Weiye Loh

The Creativity Crisis - Newsweek - 0 views

  • The accepted definition of creativity is production of something original and useful, and that’s what’s reflected in the tests. There is never one right answer. To be creative requires divergent thinking (generating many unique ideas) and then convergent thinking (combining those ideas into the best result).
  • Nobody would argue that Torrance’s tasks, which have become the gold standard in creativity assessment, measure creativity perfectly. What’s shocking is how incredibly well Torrance’s creativity index predicted those kids’ creative accomplishments as adults.
  • The correlation to lifetime creative accomplishment was more than three times stronger for childhood creativity than childhood IQ.
  • ...20 more annotations...
  • there is one crucial difference between IQ and CQ scores. With intelligence, there is a phenomenon called the Flynn effect—each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.
  • creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward.
  • It is the scores of younger children in America—from kindergarten through sixth grade—for whom the decline is “most serious.”
  • It’s too early to determine conclusively why U.S. creativity scores are declining. One likely culprit is the number of hours kids now spend in front of the TV and playing videogames rather than engaging in creative activities. Another is the lack of creativity development in our schools. In effect, it’s left to the luck of the draw who becomes creative: there’s no concerted effort to nurture the creativity of all children.
  • Around the world, though, other countries are making creativity development a national priority.
  • In China there has been widespread education reform to extinguish the drill-and-kill teaching style. Instead, Chinese schools are adopting a problem-based learning approach.
  • When faculty of a major Chinese university asked Plucker to identify trends in American education, he described our focus on standardized curriculum, rote memorization, and nationalized testing.
  • Overwhelmed by curriculum standards, American teachers warn there’s no room in the day for a creativity class.
  • The age-old belief that the arts have a special claim to creativity is unfounded. When scholars gave creativity tasks to both engineering majors and music majors, their scores fell on an identical spectrum, with the same high averages and standard deviations.
  • The argument that we can’t teach creativity because kids already have too much to learn is a false trade-off. Creativity isn’t about freedom from concrete facts. Rather, fact-finding and deep research are vital stages in the creative process.
  • The lore of pop psychology is that creativity occurs on the right side of the brain. But we now know that if you tried to be creative using only the right side of your brain, it’d be like living with ideas perpetually at the tip of your tongue, just beyond reach.
  • Creativity requires constant shifting, blender pulses of both divergent thinking and convergent thinking, to combine new information with old and forgotten ideas. Highly creative people are very good at marshaling their brains into bilateral mode, and the more creative they are, the more they dual-activate.
  • “Creativity can be taught,” says James C. Kaufman, professor at California State University, San Bernardino. What’s common about successful programs is they alternate maximum divergent thinking with bouts of intense convergent thinking, through several stages. Real improvement doesn’t happen in a weekend workshop. But when applied to the everyday process of work or school, brain function improves.
  • highly creative adults tended to grow up in families embodying opposites. Parents encouraged uniqueness, yet provided stability. They were highly responsive to kids’ needs, yet challenged kids to develop skills. This resulted in a sort of adaptability: in times of anxiousness, clear rules could reduce chaos—yet when kids were bored, they could seek change, too. In the space between anxiety and boredom was where creativity flourished.
  • highly creative adults frequently grew up with hardship. Hardship by itself doesn’t lead to creativity, but it does force kids to become more flexible—and flexibility helps with creativity.
  • In early childhood, distinct types of free play are associated with high creativity. Preschoolers who spend more time in role-play (acting out characters) have higher measures of creativity: voicing someone else’s point of view helps develop their ability to analyze situations from different perspectives. When playing alone, highly creative first graders may act out strong negative emotions: they’ll be angry, hostile, anguished.
  • In middle childhood, kids sometimes create paracosms—fantasies of entire alternative worlds. Kids revisit their paracosms repeatedly, sometimes for months, and even create languages spoken there. This type of play peaks at age 9 or 10, and it’s a very strong sign of future creativity.
  • From fourth grade on, creativity no longer occurs in a vacuum; researching and studying become an integral part of coming up with useful solutions. But this transition isn’t easy. As school stuffs more complex information into their heads, kids get overloaded, and creativity suffers. When creative children have a supportive teacher—someone tolerant of unconventional answers, occasional disruptions, or detours of curiosity—they tend to excel. When they don’t, they tend to underperform, dropping out of high school or failing to finish college at high rates.
  • They’re quitting because they’re discouraged and bored, not because they’re dark, depressed, anxious, or neurotic. It’s a myth that creative people have these traits. (Those traits actually shut down creativity; they make people less open to experience and less interested in novelty.) Rather, creative people, for the most part, exhibit active moods and positive affect. They’re not particularly happy—contentment is a kind of complacency creative people rarely have. But they’re engaged, motivated, and open to the world.
  • A similar study of 1,500 middle schoolers found that those high in creative self-efficacy had more confidence about their future and ability to succeed. They were sure that their ability to come up with alternatives would aid them, no matter what problems would arise.
    The Creativity Crisis For the first time, research shows that American creativity is declining. What went wrong, and how we can fix it.
Weiye Loh

New voting methods and fair elections : The New Yorker - 0 views

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda’s main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority’s favorite—it could, in theory, elect someone who was nobody’s favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
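Borda's point-count scheme can be sketched in a few lines of Python. The ballots below are invented for illustration (they are not Borda's own examples); with n candidates, a first-place ranking is worth n−1 points, second place n−2, and so on.

```python
# A minimal sketch of a Borda count. Ballots are invented for illustration.
from collections import Counter

def borda_winner(ballots):
    """ballots: list of rankings, each a list of candidates, best first."""
    scores = Counter()
    for ranking in ballots:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            # n - 1 points for first place, down to 0 for last place.
            scores[candidate] += n - 1 - position
    return max(scores, key=scores.get), dict(scores)

# A is the first choice of a majority (6 of 11 voters), yet B's pile of
# second-place points carries B past A -- the failure mode described above.
ballots = (
    [["A", "B", "C"]] * 6 +
    [["B", "C", "A"]] * 3 +
    [["C", "B", "A"]] * 2
)
winner, scores = borda_winner(ballots)
print(winner, scores)  # B wins: A=12, B=14, C=7
```

Strategic manipulation works the same way in reverse: the A voters could bury B in last place to drain B's second-place points.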
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
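The article does not name a specific apportionment rule, but one widely used way of aligning seat shares with vote shares is the D'Hondt highest-averages method. A minimal sketch, with invented party names and vote totals:

```python
# A sketch of D'Hondt seat apportionment, one common proportional-representation
# rule. Party names and vote totals are invented for illustration.

def dhondt(votes, seats):
    """votes: dict party -> vote count; returns dict party -> seats won."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # Each round, the next seat goes to the party with the highest
        # quotient: votes / (seats already won + 1).
        best = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[best] += 1
    return allocation

votes = {"Red": 48000, "Blue": 31000, "Green": 21000}
print(dhondt(votes, seats=10))  # {'Red': 5, 'Blue': 3, 'Green': 2}
```

With 48/31/21 per cent of the vote, the parties end up with 50/30/20 per cent of the seats, which is about as proportional as ten seats allow; a vote threshold of the kind mentioned above would simply drop small parties from `votes` before allocating.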
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
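The eliminate-and-transfer procedure just described can be sketched directly (the ballots are invented for illustration):

```python
# A sketch of instant-runoff counting. Ballots are invented for illustration.
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of rankings (best first). Returns the IRV winner."""
    candidates = set(c for ballot in ballots for c in ballot)
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader = max(tally, key=tally.get)
        if tally[leader] * 2 > total or len(candidates) == 2:
            return leader
        # No majority: eliminate the candidate with the fewest votes and
        # let those ballots transfer on the next pass through the loop.
        candidates.remove(min(tally, key=tally.get))

ballots = ([["A", "B", "C"]] * 4 +
           [["B", "A", "C"]] * 3 +
           [["C", "B", "A"]] * 2)
print(instant_runoff(ballots))  # B
```

Here A leads the first tally 4–3, but once C is eliminated, C's supporters' second preferences transfer to B, who wins 5–4: the winner's pile is exactly the "jumble of first and second preferences" described below.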
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferrable-vote elections can behave in topsy-turvy ways: they are what mathematicians call “non-monotonic,” which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn’t. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman’s calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It’s rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren’t usually published. But, in Burlington’s 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
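The topsy-turvy behaviour can be seen in a small worked example. The 100-voter electorate below is invented (it is a standard textbook illustration of non-monotonicity, not Burlington's actual ballot data):

```python
# A hand-worked sketch of instant-runoff non-monotonicity, using an
# invented 100-voter, three-candidate electorate.

# Original ballots (counts of each ranking, best first):
#   39 voters: A > B > C
#   35 voters: B > C > A
#   26 voters: C > A > B
# First preferences: A=39, B=35, C=26 -> no majority, C is eliminated.
# C's 26 ballots transfer to A: A = 39 + 26 = 65 -> A wins.
assert 39 + 26 > 50

# Now suppose 10 of the B > C > A voters warm to A and move A from last
# place to first (A > B > C). A has strictly more support than before:
#   49 voters: A > B > C
#   25 voters: B > C > A
#   26 voters: C > A > B
# First preferences: A=49, B=25, C=26 -> no majority, but now B (not C)
# is eliminated. B's 25 ballots transfer to C: C = 26 + 25 = 51 -> C wins.
assert 26 + 25 > 49

print("Extra first-place support for A changed the winner from A to C.")
```

The extra votes for A changed which rival A faced in the final round, which is precisely how a candidate "may lose if certain voters back him, and would have won if they hadn't."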
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you’d think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result “a rather discouraging one, as regards the dream of a perfect democracy.” Szpiro goes so far as to write that “the democratic world would never be the same again.”
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
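Counting a range-voting election is straightforward. A minimal sketch, using the candidates from the 2000 example discussed in the article (the scores themselves are invented); since every voter here rates every candidate, the highest total and the highest average pick the same winner:

```python
# A sketch of range voting: each voter scores each candidate, and the
# highest total wins. Scores below are invented for illustration.

def range_winner(ballots):
    """ballots: list of dicts mapping candidate -> score (0 to 5)."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get), totals

ballots = [
    {"Nader": 5, "Gore": 4, "Bush": 0},
    {"Nader": 5, "Gore": 4, "Bush": 0},
    {"Gore": 5, "Nader": 3, "Bush": 1},
    {"Bush": 5, "Gore": 2, "Nader": 0},
    {"Bush": 5, "Gore": 2, "Nader": 0},
]
print(range_winner(ballots))  # Gore: totals Nader=13, Gore=17, Bush=11
```

Gore tops only one ballot, yet wins on broad support: the candidate with the most over-all approval beats the candidates each preferred by a larger minority. If voters may leave candidates unrated, averaging instead of totalling can reward a candidate rated highly by only a handful of voters, which is why the two variants differ.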
  • Range and approval voting (in which each voter simply marks every candidate he or she finds acceptable, and the candidate with the most approvals wins) deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
    WIN OR LOSE No voting system is flawless. But some are less democratic than others. by Anthony Gottlieb
Weiye Loh

Skepticblog » The Reasonableness of Weird Things - 0 views

  • people have been talking about Phil Plait’s powerful talk, now known to the blogosphere as the “Don’t be a dick” speech (after Wheaton’s Law, an internet maxim that provided the theme of Phil’s presentation). In his talk, Phil argued that skeptics who have outreach goals should get serious about communication: In times of war, we need warriors. But this isn’t a war. You might try to say it is, but it’s not a war. We aren’t trying to kill an enemy. We’re trying to persuade other humans. And at times like that, we don’t need warriors. What we need are diplomats.
  • there are many excellent reasons to tend toward treating people with respect and courtesy. It’s morally bad to be cruel (and usually unnecessary); it’s contrary to scientific and journalistic ethics (and the search for truth) to shout down legitimate alternate views; it blinds us to flaws in our own reasoning if we fail to seriously consider viewpoints we don’t like. Most importantly (this was the theme of Phil’s talk) science communication is more effective when it starts with warmth and respect.
  • a few skeptics are tempted to think there must be something special about those who don’t believe. That conceit hardly seems worthy of dwelling upon, and yet people have actually tried to convince me on this basis that it’s not worth teaching critical thinking. “The smart people already get it,” I’ve been told, “and the stupid people never will. Don’t waste your time.” I suppose it’s human to want to draw these lines through the world: on this side, the good smart people; on the other side, the bad dumb people. But the world is not nearly so simple.
  • One of the interesting things Phil Plait did during his challenging TAM8 speech was to ask the 1300 skeptics in the room this question: How many of you here today used to believe in something — used to, past tense — whether it was flying saucers, psychic powers, religion, anything like that? You can raise your hand if you want to.
  • most pseudoscientific beliefs are not stupid. They’re just wrong.
  • the top reasons people believe weird things are not only understandable, but identical to the reasons most skeptics believe things: they are persuaded by personal experiences (or by the experiences of a loved one); or, they are persuaded by the sources they have consulted.
  • reasoning from visceral experience is a recipe for false belief.
  • I’m not suggesting that personal experience is an adequate basis for accepting paranormal claims (it isn’t) or that these claims are true (so far as science can tell, they’re not). I’m saying that, given their information and tools, many paranormalists have understandable reasons for belief.
  • However we label ourselves or others, we come up against the fact that people are complicated. Generalizations are doomed to inadequacy. But, I will suggest that the differences between skeptics and paranormal believers have less to do with innate credulity, and more to do with training and resources.
    THE REASONABLENESS OF WEIRD THINGS by DANIEL LOXTON, Jul 26 2010
Weiye Loh

The Fake Scandal of Climategate - 0 views

  • The most comprehensive inquiry was the Independent Climate Change Email Review led by Sir Muir Russell, commissioned by UEA to examine the behaviour of the CRU scientists (but not the scientific validity of their work). It published its final report in July 2010
  • It focused on what the CRU scientists did, not what they said, investigating the evidence for and against each allegation. It interviewed CRU and UEA staff, and took 111 submissions including one from CRU itself. And it also did something the media completely failed to do: it attempted to put the actions of CRU scientists into context.
    • Weiye Loh
       
      Data, in the form of email correspondence, requires context to be interpreted "objectively" and "accurately" =)
  • The Review went back to primary sources to see if CRU really was hiding or falsifying their data. It considered how much CRU’s actions influenced the IPCC’s conclusions about temperatures during the past millennium. It commissioned a paper by Dr Richard Horton, editor of The Lancet, on the context of scientific peer review. And it asked IPCC Review Editors how much influence individuals could wield on writing groups.
  • Many of these are things any journalist could have done relatively easily, but few ever bothered to do.
  • the emergence of the blogosphere requires significantly more openness from scientists. However, providing the details necessary to validate large datasets can be difficult and time-consuming, and how FoI laws apply to research is still an evolving area. Meanwhile, the public needs to understand that science cannot and does not produce absolutely precise answers. Though the uncertainties may become smaller and better constrained over time, uncertainty in science is a fact of life which policymakers have to deal with. The chapter concludes: “the Review would urge all scientists to learn to communicate their work in ways that the public can access and understand”.
  • email is less formal than other forms of communication: “Extreme forms of language are frequently applied to quite normal situations by people who would never use it in other communication channels.” The CRU scientists assumed their emails to be private, so they used “slang, jargon and acronyms” which would have been more fully explained had they been talking to the public. And although some emails suggest CRU went out of their way to make life difficult for their critics, there are others which suggest they were bending over backwards to be honest. Therefore the Review found “the e-mails cannot always be relied upon as evidence of what actually occurred, nor indicative of actual behaviour that is extreme, exceptional or unprofessional.” [section 4.3]
  • when put into the proper context, what do these emails actually reveal about the behaviour of the CRU scientists? The report concluded (its emphasis):
  • we find that their rigour and honesty as scientists are not in doubt.
  • we did not find any evidence of behaviour that might undermine the conclusions of the IPCC assessments.
  • “But we do find that there has been a consistent pattern of failing to display the proper degree of openness, both on the part of the CRU scientists and on the part of the UEA, who failed to recognize not only the significance of statutory requirements but also the risk to the reputation of the University and indeed, to the credibility of UK climate science.” [1.3]
  • The argument that Climategate reveals an international climate science conspiracy is not really a very skeptical one. Sure, it is skeptical in the weak sense of questioning authority, but it stops there. Unlike true skepticism, it doesn’t go on to objectively examine all the evidence and draw a conclusion based on that evidence. Instead, it cherry-picks suggestive emails, seeing everything as incontrovertible evidence of a conspiracy, and concludes all of mainstream climate science is guilty by association. This is not skepticism; this is conspiracy theory.
    • Weiye Loh
       
      How then do we know that we have examined ALL the evidence? What about the context of evidence then? 
  • The media dropped the ball. There is a famous quotation attributed to Mark Twain: “A lie can travel halfway around the world while the truth is putting on its shoes.” This is more true in the internet age than it was when Mark Twain was alive. Unfortunately, it took months for the Climategate inquiries to put on their shoes, and by the time they reported, the damage had already been done. The media acted as an uncritical loudspeaker for the initial allegations, which will now continue to circulate around the world forever, then failed to give anywhere near the same amount of coverage to the inquiries clearing the scientists involved. For instance, Rupert Murdoch’s The Australian published no less than 85 stories about Climategate, but not one about the Muir Russell inquiry.
  • Even the Guardian, who have a relatively good track record on environmental reporting and were quick to criticize the worst excesses of climate conspiracy theorists, could not resist the lure of stolen emails. As George Monbiot writes, journalists see FoI requests and email hacking as a way of keeping people accountable, rather than the distraction from actual science which they are to scientists. In contrast, CRU director Phil Jones says: “I wish people would spend as much time reading my scientific papers as they do reading my e-mails.”
  • This is part of a broader problem with climate change reporting: the media holds scientists to far higher standards than it does contrarians. Climate scientists have to be right 100% of the time, but contrarians apparently can get away with being wrong nearly 100% of the time. The tiniest errors of climate scientists are nitpicked and blown out of all proportion, but contrarians get away with monstrous distortions and cherry-picking of evidence. Around the same time The Australian was bashing climate scientists, the same newspaper had no problem publishing Viscount Monckton’s blatant misrepresentations of IPCC projections (not to mention his demonstrably false conspiracy theory that the Copenhagen summit was a plot to establish a world government).
  • In the current model of environmental reporting, the contrarians do not lose anything by making baseless accusations. In fact, it is in their interests to throw as much mud at scientists as possible to increase the chance that some of it will stick in the public consciousness. But there is untold damage to the reputation of the scientists against whom the accusations are being made. We can only hope that in future the media will be less quick to jump to conclusions. If only editors and producers would stop and think for a moment about what they’re doing: they are playing with the future of the planet.
  • As worthy as this defense is, surely this is the kind of political bun-fight SkS has resolutely stayed away from since its inception. The debate can only become a quagmire of competing claims, because this is part of an adversarial process that does not depend on, or even require, scientific evidence. Only by sticking resolutely to the science and the advocacy of the scientific method can SkS continue to avoid being drowned in the kind of mud through which we are obliged to wade elsewhere.
  • I disagree with gp. It is past time we all got angry, very angry, at what these people have done and continue to do. Dispassionate science doesn't cut it with the denial industry or with the media (and that "or" really isn't there). It's time to fight back with everything we can throw back at them.
  • The fact that three quick-fire threads have been run on Climategate on this excellent blog in the last few days is an indication that Climategate (fairly or not) has done serious damage to the cause of AGW activism. Mass media always overshoots and exaggerates. The AGW alarmists had a very good run - here in Australia protagonists like Tim Flannery and our living science legend Robin Williams were talking catastrophe - the 10 year drought was definitely permanent climate change - rivers might never run again - Robin (100 metre sea level rise) Williams refused to even read the Climategate emails. Climategate swung the pendulum to the other extreme - the scientists (nearly all funded by you and me) were under the pump. Their socks rubbed harder on their sandals as they scrambled for clear air. Cries about criminal hackers funded by big oil, tobacco, rightist conspirators etc were heard. Pachauri cried 'voodoo science' as he denied ever knowing about objections to the preposterous 2035 claim. How things change in a year. The drought is broken over most of Australia - Tim Flannery has gone quiet and Robin Williams is airing a science journo who says that AGW scares have been exaggerated. Some balance might have been restored as the pendulum swung, and our hard-working, misunderstood scientist brethren will take more care with their emails in future.
  • "Perhaps a more precise description would be that a common pattern in global warming skeptic arguments is to focus on narrow pieces of evidence while ignoring other evidence that contradicts their argument." And this is the issue the article discusses, but in my opinion this article is guilty of this as well. It focuses on a narrow set of non-representative claims, claims which are indeed pure propaganda by some skeptics; however, the article also suggests guilt by association, and as such these propaganda claims get attributed as the opinions of the entire skeptic camp. In doing so, the OP becomes guilty of the very same issue the OP tries to address. In other words, the issue I try to raise is not about the exact numbers or figures or any particular facts but the fact that the claim I quoted is obvious nonsense. It is nonsense because it is a sweeping statement with no specifics, and as such it is an empty statement and means nothing. A second point I have been thinking about when reading this article is why scientists should be granted immunity to dirty tricks/propaganda in a political debate. Is it because they speak under the name of science? If that is the case, why shall we not grant the same right to other spokesmen for other organizations?
    • Weiye Loh
       
      The aspiration to examine ALL evidence is again called into question here. Is it really possible to examine ALL evidence? Even if we have examined them, can we fully represent our examination? From our lab, to the manuscript, to the journal paper, to the news article, to 140characters tweets?
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism - 0 views

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
  •  
    Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

The Matthew Effect § SEEDMAGAZINE.COM

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  •  
    WHEN IT COMES TO SCIENTIFIC PUBLISHING AND FAME, THE RICH GET RICHER AND THE POOR GET POORER. HOW CAN WE BREAK THIS FEEDBACK LOOP?
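The "rich get richer" feedback loop the article describes is commonly modeled as preferential attachment, where each new paper cites earlier papers with probability proportional to the citations they already have. A toy simulation (not from the article — the paper count, citations per paper, and weighting rule are all illustrative assumptions) shows how a small early advantage compounds into a heavily skewed citation distribution:

```python
import random

def simulate_citations(n_papers=2000, cites_per_paper=5, seed=1):
    """Toy preferential-attachment model of citation accumulation.
    Each new paper cites earlier papers (with replacement, for
    simplicity) with probability proportional to 1 + citations so far."""
    random.seed(seed)
    counts = [0]  # citation counts; start with a single founding paper
    for _ in range(1, n_papers):
        weights = [1 + c for c in counts]
        for cited in random.choices(range(len(counts)), weights=weights,
                                    k=cites_per_paper):
            counts[cited] += 1
        counts.append(0)  # the new paper enters with zero citations
    return sorted(counts, reverse=True)

counts = simulate_citations()
top_1pct_share = sum(counts[: len(counts) // 100]) / sum(counts)
# The top 1% of papers capture a disproportionate share of all citations,
# even though every paper was generated by the same rule.
print(f"Top 1% of papers hold {top_1pct_share:.0%} of citations")
```

Under a uniform citation rule the top 1% would hold about 1% of citations; preferential attachment concentrates far more than that, which is the Matthew Effect in miniature.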
Weiye Loh

How to raise an unhappy child « The Berkeley Blog

  • Chua argues that “Chinese” mothers “are superior” because they demand absolute perfection—and won’t refrain from berating, threatening, and even starving their kids until they’re satisfied.
  • Chua acknowledges that her argument will offend softy “Western” parents, who prefer to coddle rather than throttle their kids—parents who prioritize happiness over achievement.
  • Though I’m anything but permissive, even by Chua’s standards, I am one of those “Western” parents that absolutely does prioritize children’s long-term happiness over their achievements and performances.  Ironically, I adapted these values from a confluence of Eastern philosophy—particularly Lao-tzu’s Tao Te Ching and Buddhist teachings—and Western science, which provides ample evidence that success follows happiness, and not the other way around.
  • Chua’s argument goes against years of scientific research into what makes kids truly happy—and successful—in life.  Moreover, it rests on a faulty premise: Rather than being overly permissive, many American parents—especially the well-educated, affluent Americans reading excerpts in the WSJ or on Slate.com—are overly focused on achievement already.
  • Chua defines success narrowly, focusing on achievement and perfection at all costs: Success is getting straight As and being a violin or piano prodigy.  Three decades of research clearly suggests that such a narrow focus on achievement can produce wildly unhappy people. Yes, they may boast perfect report cards and stunning piano recitals. But we are a country full of high-achieving but depressed and suicidal college students, a record number of whom take prescription medication for anxiety and depression.
  • Chua argues that happiness comes from mastery, and that mastery is achieved through “tenacious practice, practice, practice.”  She’s right here—practice does fuel success—but she’s wrong that forced mastery will lead to happiness.  “Once a child starts to excel at something,” she writes, “he or she gets praise, admiration and satisfaction. This builds confidence and makes the once not-fun activity fun. This in turn makes it easier for the parent to get the child to work even more.”
  • A country with an economic system that is not adequately flexible to allow its own individual citizens to choose for themselves their own answers to their economic problems and challenges has limited, or restricted, career choices. In such a country “success” is not broadly defined, it is narrowly defined. In other words, authoritarian governments, dictatorships, or whatever you want to call them, have few options for their people to attain “success” other than for their citizens to shoehorn their lives into regimented lifestyles. This should be no surprise to anyone; regimes create regimented lifestyles because those are the only lifestyles that lead to success within those economies.
  •  
    How to raise an unhappy child
Weiye Loh

Roger Pielke Jr.'s Blog: How to Get to 80% "Clean Energy" by 2035

  • I have put together a quick spreadsheet to allow me to do a bit of sensitivity analysis of what it would take for the US to get to 80% "clean energy" in its electricity supply by 2035, as proposed by President Obama in his State of the Union Speech
  • 1. I started with the projections from the EIA to 2035, available here in XLS.
    2. I then calculated the share of clean energy in 2011, assuming that natural gas gets a 50% credit for being clean. That share is just under 44% (Nukes 21%, Renewables 13%, Gas 10%).
    3. I then calculated how that share could be increased to 80% by 2035.
  • Here is what I found:
    1. Coal pretty much has to go away. Specifically, about 90% or more of coal energy would have to be replaced.
    2. I first looked at replacing all the coal with gas, all else equal. That gets the share of clean energy up to about 68%, a ways off the target.
    3. I then fiddled with the numbers to arrive at 80%. One way to get there would be to increase the share of nukes to 43%, gas to 31%, and renewables to 22%. (Note that the EIA reference scenario -- BAU -- to 2035 has these shares at 17%, 21%, and 17% respectively, for a share of 45%, just about like today.)
  • Increasing nuclear power in the EIA reference scenario from a 17% to 43% share of electricity implies, in round numbers, about 300 new nuclear power plants by 2035.***  If you do not like nuclear you can substitute wind turbines or solar thermal plants (or even reductions in electricity consumption) according to the data provided in The Climate Fix, Table 4.4.  The magnitude of the task is the same size, just expressed differently.
  • One nuclear plant worth of carbon-free energy every 30 days between now and 2035. This does not even consider electrification of some fraction of the vehicle fleet -- another of President Obama's goals -- which presumably would add a not-insignificant amount to electricity demand. Thus, I'd suggest that the President's clean energy goal is much more of the aspirational variety than an actual policy target expected to be hit precisely.
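The spreadsheet arithmetic behind these scenarios can be sketched in a few lines. The 50% gas credit and the 2011 shares (nukes 21%, renewables 13%, gas 20%) come from the post; the roughly 45% coal share is an assumption implied by Pielke's numbers, not a figure he states directly:

```python
def clean_share(nuclear, renewables, gas, gas_credit=0.5):
    # "Clean energy" share of electricity, crediting natural gas at 50%
    return nuclear + renewables + gas_credit * gas

# 2011 mix from the post: nukes 21%, renewables 13%, gas 20% (counted as 10%)
today = clean_share(0.21, 0.13, 0.20)        # ~0.44

# Replace all coal (assumed ~45% of supply) with gas, all else equal
coal_to_gas = clean_share(0.21, 0.13, 0.65)  # ~0.665, well short of 80%

# One mix that reaches the target: nukes 43%, renewables 22%, gas 31%
target = clean_share(0.43, 0.22, 0.31)       # ~0.805

print(today, coal_to_gas, target)
```

This reproduces the three waypoints in the post: about 44% today, about 68% with a full coal-to-gas swap, and just over 80% with the expanded nuclear/renewables mix.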
Weiye Loh

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias

  • Dear Prof. Haidt,You understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative and liberal leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction; it’s a common example of a bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.
Weiye Loh

Roger Pielke Jr.'s Blog: Full Comments to the Guardian

  • The Guardian has a good article today on a threatened libel suit under UK law against Gavin Schmidt, a NASA researcher who blogs at Real Climate, by the publishers of the journal Energy and Environment. 
  • Here are my full comments to the reporter for the Guardian, who was following up on Gavin's reference to comments I had made a while back about my experiences with E&E:
  • In 2000, we published a really excellent paper (in my opinion) in E&E that has stood the test of time: Pielke, Jr., R. A., R. Klein, and D. Sarewitz (2000), Turning the big knob: An evaluation of the use of energy policy to modulate future climate impacts. Energy and Environment 2:255-276. http://sciencepolicy.colorado.edu/admin/publication_files/resource-250-2000.07.pdf You'll see that paper was in only the second year of the journal, and we were obviously invited to submit a year or so before that. It was our expectation at the time that the journal would soon be ISI listed and it would become like any other academic journal. So why not publish in E&E?
  • That paper, like a lot of research, required a lot of effort. So it was very disappointing to see E&E, in the years that followed, identify itself as an outlet for alternative perspectives on the climate issue. It has published a number of low-quality papers and a high number of opinion pieces, and as far as I know it never did get ISI listed.
  • Boehmer-Christiansen's quote about following her political agenda in running the journal is one that I also have cited on numerous occasions as an example of the pathological politicization of science. In this case the editor's political agenda has clearly undermined the legitimacy of the outlet.  So if I had a time machine I'd go back and submit our paper elsewhere!
  • A consequence of the politicization of E&E is that any paper published there is subsequently ignored by the broader scientific community. In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis. So the politicization of E&E enables a like response from its critics, which many have taken full advantage of. For outside observers of climate science this action and response together give the impression that scientific studies can be evaluated simply according to non-scientific criteria, which ironically undermines all of science, not just E&E.  The politicization of the peer review process is problematic regardless of who is doing the politicization because it more readily allows for political judgments to substitute for judgments of the scientific merit of specific arguments.  An irony here of course is that the East Anglia emails revealed a desire to (and some would say success in) politicize the peer review process, which I discuss in The Climate Fix.
  • For my part, in 2007 I published a follow on paper to the 2000 E&E paper that applied and extended a similar methodology.  This paper passed peer review in the Philosophical Transactions of the Royal Society: Pielke, Jr., R. A. (2007), Future economic damage from tropical cyclones: sensitivities to societal and climate changes. Philosophical Transactions of the Royal Society A 365 (1860) 2717-2729 http://sciencepolicy.colorado.edu/admin/publication_files/resource-2517-2007.14.pdf
  • Over the long run I am confident that good ideas will win out over bad ideas, but without care to the legitimacy of our science institutions -- including journals and peer review -- that long run will be a little longer.