
New Media Ethics 2009 course: Group items tagged "my"


Weiye Loh

Do avatars have digital rights? - 20 views

Hi Weiye, I agree with you that this brings in the topic of representation. Maybe you should try taking Media and Representation by Dr. Ingrid to discuss this further. Going back to your questio...

avatars

Weiye Loh

The Ashtray: The Ultimatum (Part 1) - NYTimes.com - 0 views

  • “Under no circumstances are you to go to those lectures. Do you hear me?” Kuhn, the head of the Program in the History and Philosophy of Science at Princeton where I was a graduate student, had issued an ultimatum. It concerned the philosopher Saul Kripke’s lectures — later to be called “Naming and Necessity” — which he had originally given at Princeton in 1970 and planned to give again in the Fall, 1972.
  • Whiggishness — in history of science, the tendency to evaluate and interpret past scientific theories not on their own terms, but in the context of current knowledge. The term comes from Herbert Butterfield’s “The Whig Interpretation of History,” written when Butterfield, a future Regius professor of history at Cambridge, was only 31 years old. Butterfield had complained about Whiggishness, describing it as “…the study of the past with direct and perpetual reference to the present” – the tendency to see all history as progressive, and in an extreme form, as an inexorable march to greater liberty and enlightenment. [3] For Butterfield, on the other hand, “…real historical understanding” can be achieved only by “attempting to see life with the eyes of another century than our own.” [4][5].
  • Kuhn had attacked my Whiggish use of the term “displacement current.” [6] I had failed, in his view, to put myself in the mindset of Maxwell’s first attempts at creating a theory of electricity and magnetism. I felt that Kuhn had misinterpreted my paper, and that he — not me — had provided a Whiggish interpretation of Maxwell. I said, “You refuse to look through my telescope.” And he said, “It’s not a telescope, Errol. It’s a kaleidoscope.” (In this respect, he was probably right.) [7].
  • ...9 more annotations...
  • I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’ ” [8]
    He started moaning. He put his head in his hands and was muttering, “He’s trying to kill me. He’s trying to kill me.”
    And then I added, “…except for someone who imagines himself to be God.”
    It was at this point that Kuhn threw the ashtray at me.
  • I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.
  • That's the problem with relativism: Who's to say who's right and who's wrong? Somehow I'm not surprised to hear Kuhn was an ashtray-hurler. In the end, what other argument could he make?
  • For us to have a conversation and come to an agreement about the meaning of some word without having to refer to some outside authority like a dictionary, we would of necessity have to be satisfied that our agreement was genuine and not just a polite acknowledgement of each other's right to an opinion; can you agree with that? If so, then let's see if we can agree on the meaning of the word 'know', because that may be the crux of the matter. When I use the word 'know' I mean more than the capacity to apprehend some aspect of the world through language or some other representational symbolism. Included in the word 'know' is the direct sensorial perception of some aspect of the world. For example, I sense the floor that my feet are now resting upon. I 'know' the floor is really there; I can sense it. Perhaps I don't 'know' what the floor is made of, who put it there, and other incidental facts one could know through the usual symbolism such as language, as in a story someone tells me. Nevertheless, the reality I need to 'know' is that the floor, or whatever you may wish to call the solid (relative to my body), flat and level surface supported by more structure than the earth, is really there and reliably capable of supporting me. This is true and useful knowledge that goes directly from the floor itself to my knowing about it, via sensation, and has nothing to do with my interpretive system.
  • Now I am interested in 'knowing' my feet in the same way that my feet and the whole body they are connected to 'know' the floor. I sense my feet sensing the floor. My feet are as real as the floor, and I know they are there, sensing the floor, because I can sense them. Furthermore, now I 'know' that it is 'I' sensing my feet, sensing the floor. Do you see where I am going with this line of thought? I am including in the word 'know' more meaning than it is commonly given by everyday language. Perhaps it sounds as if I want to expand on the Cartesian formula of cogito ergo sum, and in truth I prefer to say I sense therefore I am. It is through my sensations of the world, first and foremost, that my awareness, such as it is, is actively engaged with reality. Now, any healthy normal animal senses the world, but we can't 'know' if they experience reality as we do, since we can't have a conversation with them to arrive at agreement. But we humans can have this conversation and possibly agree that we can 'know' the world through sensation. We can even know what is 'I' through sensation. In fact, there is no other way to know 'I' except through sensation. Thought is symbolic representation, not direct sensing, so even though the thoughtful modality of regarding the world may be a far more reliable modality than sensation in predicting what might happen next, its very capacity for such accurate prediction is also its biggest weakness: its capacity for error.
  • Sensation cannot be 'wrong' unless it is used to predict outcomes. Thought can be wrong for both predicting outcomes and for 'knowing' reality. Sensation alone can 'know' reality even though it is relatively unreliable, useless even, for making predictions.
  • If we prioritize our interests by placing predictability over pure knowing through sensation, then of course we will not value the 'knowledge' to be gained through sensation. But if we can switch the priorities - out of sheer curiosity perhaps - then we can enter a realm of knowledge through sensation that is unbelievably spectacular. Our bodies are 'made of' reality, and by methodically exercising our nascent capacity for self sensing, we can connect our knowing 'I' to reality directly. We will not be able to 'know' what it is that we are experiencing in the way we might wish, which is to be able to predict what will happen next or to represent to ourselves symbolically what we might experience when we turn our attention to that sensation. But we can arrive at a depth and breadth of 'knowing' that is utterly unprecedented in our lives by operating that modality.
  • One of the impressions that comes from a sustained practice of self sensing is a clearer feeling for what "I" is and why we have a word for that self-referential phenomenon, seemingly located somewhere behind our eyes and between our ears. The thing we call "I" or "me," depending on the context, turns out to be a moving point, a convergence vector for a variety of images, feelings and sensations. It is a reference point into which certain impressions flow and out of which certain impulses to act diverge, and which may or may not animate certain muscle groups into action. Following this tricky exercise in attention and sensation, we can quickly see for ourselves that attention is more like a focused beam and awareness is more like a diffuse cloud, but both are composed of energy, and like all energy they vibrate, they oscillate with a certain frequency. That's it for now.
  • I loved the writer's efforts to find a fixed definition of “Incommensurability;” there was of course never a concrete meaning behind the word. Smoke and mirrors.
Weiye Loh

Skepticblog » Further Thoughts on Atheism - 0 views

  • Even before I started writing Evolution: How We and All Living Things Came to Be I knew that it would very briefly mention religion, make a mild assertion that religious questions are out of scope for science, and move on. I knew this was likely to provoke blow-back from some in the atheist community, and I knew mentioning that blow-back in my recent post “The Standard Pablum — Science and Atheism” would generate more.
  • Still, I was surprised by the quantity of the responses to the blog post (208 comments as of this moment, many of them substantial letters), and also by the fierceness of some of those responses. For example, according to one poster, “you not only pandered, you lied. And even if you weren’t lying, you lied.” (Several took up this “lying” theme.) Another, disappointed that my children’s book does not tell a general youth audience to look to “secular humanism for guidance,” declared that “I’d have to tear out that page if I bought the book.”
  • I don’t mean to suggest that there are not points of legitimate disagreement in the mix — there are, many of them stated powerfully. There are also statements of support, vigorous debate, and (for me at least) a good deal of food for thought. I invite anyone to browse the thread, although I’d urge you to skim some of it. (The internet is after all a hyperbole-generating machine.)
  • ...10 more annotations...
  • I lack any belief in any deity. More than that, I am persuaded (by philosophical argument, not scientific evidence) to a high degree of confidence that gods and an afterlife do not exist.
  • do try to distinguish between my work as a science writer and skeptical activist on the one hand, and my personal opinions about religion and humanism on the other.
  • Atheism is a practical handicap for science outreach. I’m not naive about this, but I’m not cynical either. I’m a writer. I’m in the business of communicating ideas about science, not throwing up roadblocks and distractions. It’s good communication to keep things as clear, focused, and on-topic as possible.
  • Atheism is divisive for the skeptical community, and it distracts us from our core mandate. I was blunt about this in my 2007 essay “Where Do We Go From Here?”, writing, I’m both an atheist and a secular humanist, but it is clear to me that atheism is an albatross for the skeptical movement. It divides us, it distracts us, and it marginalizes us. Frankly, we can’t afford that. We need all the help we can get.
  • In What Do I Do Next? I urged skeptics to remember that there are many other skeptics who do hold or identify with some religion. Indeed, the modern skeptical movement is built partly on the work of people of faith (including giants like Harry Houdini and Martin Gardner). You don’t, after all, have to be against god to be against fraud.
  • In my Skeptical Inquirer article “The Paradoxical Future of Skepticism” I argued that skeptics must set aside the conceit that our goal is a cultural revolution or the dawning of a new Enlightenment. … When we focus on that distant, receding, and perhaps illusory goal, we fail to see the practical good we can do, the harm-reduction opportunities right in front of us. The long view subverts our understanding of the scale and hazard of paranormal beliefs, leading to sentiments that the paranormal is “trivial” or “played out.” By contrast, the immediate, local, human view — the view that asks “Will this help someone?” — sees obvious opportunities for every local group and grassroots skeptic to make a meaningful difference.
  • This practical argument, that skepticism can get more done if we keep our mandate tight and avoid alienating our best friends, seems to me an important one. Even so, it is not my main reason for arguing that atheism and skepticism are different projects.
  • In my opinion, metaphysics and ethics are out of scope for science — and therefore out of scope for skepticism. This is by far the most important reason I set aside my own atheism when I put on my “skeptic” hat. It’s not that I don’t think atheism is rational — I do. That’s why I’m an atheist. But I know that I cannot claim scientific authority for a conclusion that science cannot test, confirm, or disprove. And so, I restrict myself as much as possible, in my role as a skeptic and science writer, to investigable claims. I’ve become a cheerleader for this “testable claims” criterion (and I’ll discuss it further in future posts) but it’s not a new or radical constriction of the scope of skepticism. It’s the traditional position occupied by skeptical organizations for decades.
  • In much of the commentary, I see an assumption that I must not really believe that testable paranormal and pseudoscientific claims (“I can read minds”) are different in kind from the untestable claims we often find at the core of religion (“god exists”). I acknowledge that many smart people disagree on this point, but I assure you that this is indeed what I think.
  • I’d like to call out one blogger’s response to my “Standard Pablum” post. The author certainly disagrees with me (we’ve discussed the topic often on Twitter), but I thank him for describing my position fairly: From what I’ve read of Daniel’s writings before, this seems to be a very consistent position that he has always maintained, not a new one he adopted for the book release. It appears to me that when Daniel says that science has nothing to say about religion, he really means it. I have nothing to say to that. It also appears to me that when he says skepticism is a “different project than atheism” he also means it.
    FURTHER THOUGHTS ON ATHEISM by DANIEL LOXTON, Mar 05 2010
Weiye Loh

Do Androids Dream of Origami Unicorns? | Institute For The Future - 0 views

  • rep.licants is the work I did for my master's thesis. During my studies, I developed an interest in the way most people use social networks, and also in the differences between someone's real identity and their digital one.
  • Back to rep.licants: when I began to think about a project for my master's thesis, I really wanted to work on those two themes (the mix between digital and real identity, and a kind of study of how users use social networks), with the aim of raising discussion about both.
  • The negative responses are mainly from people who thought rep.licants was a real, serious web service giving away, for free, high-performing bots able to almost perfectly replicate the user. If that is what they expected, I understand their disappointment, because my bot is far from high-performing! Some responses were negative because people found it kind of scary to ask a bot to manage their own digital identity, so they rejected the idea.
  • ...6 more annotations...
  • The positive responses are mainly from people who understood that rep.licants is not about providing high-performing bots, but is more like an experiment (and also a kind of critique of how most users use social networks) in which users can mix themselves with a bot and see what happens. Because even if my bots are crap, they can sometimes be surprising.
  • But I was kind of surprised that so many people would really expect a real bot to manage their social network accounts. Twitter never responded, and Facebook responded by banning, three times already, my Facebook application, which manages and runs all the Facebook bots.
  • Some people use the bot: a. Just as an experiment: they want to see what the bot can do and whether it can really improve their virtual social influence, or they experiment with how long they can keep a bot on their account without their friends noticing it is run by a bot. b. I have seen a few times, inside my database which stores information about the users, that some of them have a Twitter name like "renthouseUSA", so I guess they are using rep.licants to get a presence on social networks without managing anything, for a commercial goal. c. This is feedback I have had many times, and it is the reason why I use rep.licants on my own Twitter account: if you are precise with the keywords that you give to the bot, it will sometimes find very interesting content related to your interests. My bot made me discover a lot of interesting things, by posting them on Twitter, that I would never have found without it. New information comes so fast and in such big quantities that it becomes really difficult to deal with. For example, just on Twitter I follow 80 people (which is not a lot), all of them because I know they might tweet interesting things related to my interests. But maybe 10 of those 80 tweet quite a lot (maybe 1-2 tweets per hour), and as I check my Twitter feed only once per day, I sometimes lose more than an hour finding the interesting tweets among everything those 80 people posted. And this is only Twitter! I really think that we need more and more personal robots to filter information for us. And this is a very positive point about having a bot that I could never have imagined when I was beginning my project.
  • One surprising bug appeared when the Twitter bots began to talk to themselves. It may be boring for some users to see their own account talk to itself once per day, but when I discovered the bug I found it very funny. So I decided to keep it!
  • This video of a chatbot having a conversation with itself went viral, perhaps in part because the conversation immediately turned towards more existentialist questions and responses. The conversation was recorded at the Cornell Creative Machines Lab, where the faculty are researching how to make helper bots.
  • The questions that rep.licants poses are deeply human and social ones, laced with uncertainties about the kinds of interactions we count as normal and the responsibilities we owe to ourselves and each other. Seeing these bots carry out conversations with themselves and with human counterparts (much less other non-human counterparts) allows us to take traditional social and technological research into a different territory, asking not only what it means to be human, but also what it means to be non-human.
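The keyword-filtering idea described in the rep.licants interview above (a bot that scans a fast-moving feed and surfaces only the items matching a user's interest keywords) can be sketched in a few lines. This is a hypothetical illustration, not the actual rep.licants code; the function name, feed contents, and keywords are all invented.

```python
# Hypothetical sketch of the keyword-filtering idea from the interview:
# a bot scans a stream of tweets and keeps only those that mention at
# least one of the user's interest keywords.

def filter_feed(tweets, keywords):
    """Return the tweets that mention at least one interest keyword."""
    lowered = [k.lower() for k in keywords]
    return [t for t in tweets if any(k in t.lower() for k in lowered)]

feed = [
    "Just had lunch, the pasta was great",
    "New paper on social network identity and avatars",
    "Our chatbot experiment went viral today",
]

interesting = filter_feed(feed, ["identity", "chatbot"])
print(interesting)  # keeps only the identity and chatbot tweets
```

Even a crude filter like this captures the appeal the author describes: of a large, fast feed, only the items touching the stated interests survive.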
Weiye Loh

Libel Chill and Me « Skepticism « Critical Thinking « Skeptic North - 0 views

  • Skeptics may by now be very familiar with recent attempts in Canada to ban wifi from public schools and libraries.  In short: there is no valid scientific reason to be worried about wifi.  It has also been revealed that the chief scientists pushing the wifi bans have been relying on poor data and even poorer studies.  By far the vast majority of scientific data that currently exists supports the conclusion that wifi and cell phone signals are perfectly safe.
  • So I wrote about that particular topic in the summer.  It got some decent coverage, but the fear mongering continued. I wrote another piece after I did a little digging into one of the main players behind this, one Rodney Palmer, and I discovered some decidedly pseudo-scientific tendencies in his past, as well as some undisclosed collusion.
  • One night I came home after a long day at work, a long commute, and a phone call that a beloved family pet was dying, and will soon be in significant pain.  That is the state I was in when I read the news about Palmer and Parliamentary committee.
  • ...18 more annotations...
  • That’s when I wrote my last significant piece for Skeptic North.  Titled, “Rodney Palmer: When Pseudoscience and Narcissism Collide,” it was a fiery take-down of every claim I heard Palmer speak before the committee, as well as reiterating some of his undisclosed collusion, unethical media tactics, and some reasons why he should not be considered an expert.
  • This time, the article got a lot more reader eyeballs than anything I had ever written for this blog (or my own) and it also caught the attention of someone on a school board which was poised to vote on wifi.  In these regards: Mission very accomplished.  I finally thought that I might be able to see some people in the media start to look at Palmer’s claims with a more critical eye than they had been previously, and I was flattered at the mountain of kind words, re-tweets, reddit comments and Facebook “likes.”
  • The comments section was mostly supportive of my article, and it was one of the few things that kept me from hiding in a hole for six weeks.  There were a few comments in opposition to what I wrote, some sensible, most incoherent rambling (one commenter, when asked for evidence, actually linked to a YouTube video which they referred to as “peer reviewed”).
  • One commenter was none other than the titular subject of the post, Rodney Palmer himself.  Here is a screen shot of what he said: Screen shot of the Libel/Slander threat.
  • Knowing full well the story of the libel threat against Simon Singh, I’ve always thought that if ever a threat like that came my way, I’d happily beat it back with the righteous fury and good humour of a person with the facts on their side.  After all, if I’m wrong, you’d be able to prove me wrong, rather than try to shut me up with a threat of a lawsuit.  Indeed, I’ve been through a similar situation once before, so I should be an old hand at this! Let me tell you friends, it’s not that easy.  In fact, it’s awful.  Outside observers could easily identify that Palmer had no case against me, but that was still cold comfort to me.  It is a very stressful situation to find yourself in.
  • The state of libel and slander laws in this country is such that a person can threaten a lawsuit without actually threatening a lawsuit.  There is no need to hire a lawyer to investigate the claims, look into who I am, where I live, where I work, and issue a carefully worded threatening letter demanding compliance.  All a person has to say is some version of “Libel.  Slander.  Hmmmm….,” and that’s enough to spook a lot of people into backing off. It’s a modern day bogeyman.  They don’t have to prove it.  They don’t have to act on it.  A person or organization just has to say “BOO!” with sufficient seriousness, and unless you’ve got a good deal of editorial and financial support, discussion goes out the window. Libel Chill refers to the ‘chilling effect’ that the possibility of a libel/slander lawsuit has.  If a person is scared they might get sued, then they won’t even comment on a piece at all.  In my case, I had already commented three times on the wifi scaremongering, but this bogus threat against me was surely a major contributing factor to my not commenting again.
  • I ceased to discuss anything in the comment thread of the original article, and even shied away from other comment threads calling me out.  I learned a great deal about the wifi/EMF issue after I wrote the article, but I did not comment on any of it, because I knew that Palmer and his supporters were watching me like a hawk (sorry to stretch the simile), and would likely try to silence me again.  I couldn’t risk a lawsuit.  Even though I knew there was no case against me, I couldn’t afford a lawyer just to prove that I didn’t do anything illegal.
  • Ontario’s Libel and Slander Act, 1990, hasn’t really caught up with the internet.  There isn’t a clear precedent that defines a blog post, Twitter feed or Facebook post as falling under the umbrella of “broadcast,” which is what the act addresses.  If I had written the original article in print, Palmer would have had six weeks to file suit against me.  But the internet is only kind of considered ‘broadcast.’  So it could be just six weeks, but he could also have up to two years to act and get a lawyer after me.  Truth is, there’s not a clear demarcation point in our Canadian legal system.
  • Libel laws in Canada are somewhere in between the Plaintiff-favoured UK system, and the Defendant-favoured US system.  On the one hand, if Palmer chose to incur the expense and time to hire a lawyer and file suit against me, the burden of proof would be on me to prove that I did not act with malice.  Easy peasy.  On the other hand, I would have a strong case that I acted in the best interests of Canadians, which would fall under the recent Supreme Court of Canada decision on protecting what has been termed, “Responsible Communication.”  The Supreme Court of Canada decision does not grant bloggers immunity from libel and slander suits, but it is a healthy dose of welcome freedom to discuss issues of importance to Canadians.
  • Palmer himself did not specify anything against me in his threat.  There was nothing particular that he complained about, he just said a version of “Libel and Slander!” at me.  He may as well have said “Boo!”
  • This is not a DBAD discussion (although I wholeheartedly agree with Phil Plait there). 
  • If you’d like to boil my lessons down to an acronym, I suppose the best one would be DBRBC: Don’t be reckless. Be Careful.
  • I wrote a piece that, although it was not incorrect in any measurable way, was written with fire and brimstone, piss and vinegar.  I stand by my piece, but I caution others to be a little more careful with the language they use.  Not because I think it is any less or more tactically advantageous (because I’m not sure anyone can conclusively demonstrate that being an aggressive jerk is an inherently better or worse communication tool), but because the risks aren’t always worth it.
  • I’m not saying don’t go after a person.  There are egomaniacs out there who deserve to be called out and taken down (verbally, of course).  But be very careful with what you say.
  • Ask yourself some questions first: 1) What goal(s) are you trying to accomplish with this piece? Are you trying to convince people that there is a scientific misunderstanding here?  Are you trying to attract the attention of the mainstream media to a particular facet of the issue?  Are you really just pissed off and want to vent a little bit?  Is this article a catharsis, or is it communicative?  Be brutally honest about your intentions; it’s not as easy as you think.  Venting is okay.  So is vicious venting, but be careful what you dress it up as.
  • 2) In order to attain your goals, did you use data, or personalities?  If the former, are you citing the best, most current data you have available to you? Have you made a reasonable effort to check your data against any conflicting data that might be out there? If the latter, are you providing a mountain of evidence, and not just projecting onto personalities?  There is nothing inherently immoral or incorrect with going after the personalities.  But it is a very risky undertaking. You have to be damn sure you know what you’re talking about, and damn ready to defend yourself.  If you’re even a little loose with your claims, you will be called out for it, and a legal threat is very serious and stressful. So if you’re going after a personality, is it worth it?
  • 3) Are you letting the science speak for itself?  Are you editorializing?  Are you pointing out what part of your piece is data and what part is your opinion?
  • 4) If this piece was written in anger, frustration, or otherwise motivated by a powerful emotion, take a day.  Let your anger subside.  It will.  There are many cathartic enterprises out there, and you don’t need to react to the first one that comes your way.  Let someone else read your work before you share it with the internet.  Cooler heads definitely do think more clearly.
Weiye Loh

Haidt Requests Apology from Pigliucci « YourMorals.Org Moral Psychology Blog - 0 views

  • Here is my response to Pigliucci, which I posted as a comment on his blog. (Well, I submitted it as a comment on Feb 13 at 4pm EST, but he has not approved it yet, so it doesn’t show yet over there.)
  • Massimo Pigliucci, the chair of the philosophy department at CUNY-Lehman, wrote a critique of me on his blog, Rationally Speaking, in which he accused me of professional misconduct.
  • Dear Prof. Pigliucci: Let me be certain that I have understood you. You did not watch my talk, even though a link to it was embedded in the Tierney article. Instead, you picked out one piece of my argument (that the near-total absence of conservatives in social psychology is evidence of discrimination) and you made the standard response, the one that most bloggers have made: underrepresentation of any group is not, by itself, evidence of discrimination. That’s a good point; I made it myself quite explicitly in my talk: Of course there are many reasons why conservatives would be underrepresented in social psychology, and most of them have nothing to do with discrimination or hostile climate. Research on personality consistently shows that liberals are higher on openness to experience. They’re more interested in novel ideas, and in trying to use science to improve society. So of course our field is and always will be mostly liberal. I don’t think we should ever strive for exact proportional representation.
  • ...6 more annotations...
  • I made it clear that I’m not concerned about simple underrepresentation. I did not even make the moral argument that we need ideological diversity to right an injustice. Rather, I focused on what happens when a scientific community shares sacred values. A tribal moral community arises, one that actively suppresses ideas that are sacrilegious, and that discourages non-believers from entering. I argued that my field has become a tribal moral community, and the absence of conservatives (not just their underrepresentation) has serious consequences for the quality of our science. We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.
  • The fact that you criticized me without making an effort to understand me is not surprising.
  • Rather, what sets you apart from all other bloggers who are members of the academy is what you did next. You accused me of professional misconduct (lying, essentially) and you speculated as to my true motive: I suspect that Haidt is either an incompetent psychologist (not likely) or is disingenuously saying the sort of things controversial enough to get him in the New York Times (more likely).
  • As far as I can tell your evidence for these accusations is that my argument was so bad that I couldn’t have believed it myself. Here is how you justified your accusations: A serious social scientist doesn’t go around crying out discrimination just on the basis of unequal numbers. If that were the case, the NBA would be sued for discriminating against short people, dance companies against people without spatial coordination, and newspapers against dyslexics
  • Accusations of professional misconduct are sensibly made only if one has a reasonable and detailed understanding of the facts of the case, and can bring forth evidence of misconduct. Pigliucci has made no effort to acquire such an understanding, nor has he presented any evidence to support his accusation. He simply took one claim from the Tierney article and then ran wild with speculation about Haidt’s motives. It was pretty silly of him, and downright irresponsible of Pigliucci to publish that garbage without even knowing what Haidt said.
  • I challenge you to watch the video of my talk (click here) and then either 1) Retract your blog post and apologize publicly for calling me a liar or 2) State on your blog that you stand by your original post. If you do stand by your post, even after hearing my argument, then the world can decide for itself which of us is right, and which of us best models the ideals of science, philosophy, and the Enlightenment which you claim for yourself in the header of your blog, “Rationally Speaking.” Jonathan Haidt
Weiye Loh

Roger Pielke Jr.'s Blog: Global Temperature Trends - 0 views

  • My concern about the potential effects of human influences on the climate system is not a function of global average warming over a long period of time or of predictions of continued warming into the future.
  • What matters are the effects of human influences on the climate system on human and ecological scales, not at the global scale. No one experiences global average temperature, and it is very poorly correlated with things that we do care about in specific places at specific times.
  • Consider the following thought experiment. Divide the world up into 1,000 grid boxes of equal area. Now imagine that the temperature in each of 500 of those boxes goes up by 20 degrees while the temperature in the other 500 goes down by 20 degrees. The net global change is exactly zero (because I made it so). However, the impacts would be enormous. Let's further say that the changes prescribed in my thought experiment are the direct consequence of human activity. Would we want to address those changes? Or would we say, ho hum, it all averages out globally, so no problem? The answer is obvious and is not a function of what happens at some global average scale, but what happens at human and ecological scales.
  • ...2 more annotations...
  • In the real world, the effects of increasing carbon dioxide on human and ecological scales are well established, and they include a biogeochemical effect on land ecosystems with subsequent effects on water and climate, as well as changes to the chemistry of the oceans. Is it possible that these effects are benign? Sure. Is it also possible that these effects have some negatives? Sure. These two factors alone would be sufficient for one to begin to ask questions about the worth of decarbonizing the global energy system. But greenhouse gas emissions also have a radiative effect that, in the real world, is thought to be a net warming, all else equal and over a global scale. However, if this effect were to be a net cooling, or even no net effect at the global scale, it would not change my views about a need to consider decarbonizing the energy system one bit. There is an effect -- or effects to be more accurate -- and these effects could be negative.
  • The debate over climate change has many people on both sides of the issue wrapped up in discussing global average temperature trends. I understand this as it is an icon with great political symbolism. It has proved a convenient political battleground, but the reality is that it should matter little to the policy case for decarbonization. What matters is that there is a human effect on the climate system and it could be negative with respect to things people care about. That is enough to begin asking whether we want to think about accelerating decarbonization of the global economy.
  •  
    one needs to know only two things about the science of climate change to begin asking whether accelerating decarbonization of the economy might be worth doing: Carbon dioxide has an influence on the climate system. This influence might well be negative for things many people care about. That is it. An actual decision to accelerate decarbonization and at what rate will depend on many other things, like costs and benefits of particular actions unrelated to climate and technological alternatives. In this post I am going to further explain my views, based on an interesting question posed in that earlier thread. What would my position be if it were to be shown, hypothetically, that the global average surface temperature was not warming at all, or in fact even cooling (over any relevant time period)? Would I then change my views on the importance of decarbonizing the global energy system?
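Pielke's grid-box thought experiment above is easy to verify numerically. The sketch below (a minimal illustration, not code from the blog; the box count and ±20-degree changes are taken from the thought experiment itself) shows how a global average of exactly zero can coexist with a large change in every single box:

```python
import random

# Pielke's thought experiment: 1,000 equal-area grid boxes.
# Half warm by 20 degrees, half cool by 20 degrees.
changes = [20.0] * 500 + [-20.0] * 500
random.shuffle(changes)  # where the warming lands doesn't affect the averages

global_mean_change = sum(changes) / len(changes)
mean_local_change = sum(abs(c) for c in changes) / len(changes)

print(global_mean_change)  # zero at the global scale
print(mean_local_change)   # enormous at human and ecological scales
```

The global mean comes out to 0.0 while the average local change is 20 degrees, which is the point: the globally averaged number can hide exactly the changes people would care about.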
Chen Guo Lim

YouTube - Mika - Lady Jane - 0 views

shared by Chen Guo Lim on 26 Aug 09
  •  
    While I was watching this video, I suddenly had a desire to share it with my friends. Then I realised that there are serious ethics issues here. Such is the life of a NM4204 student. 1. Is it alright to video a clip of a live performance? Seeing as I have just spent a couple of hundreds on a ticket, surely I am allowed to bring home some memories. Leaving uploading online aside, is the act of recording itself infringing on rights, seeing as it does not harm either party if the clip is stored in my device and I view it in my own time? 2. By sharing this file with everyone in the class, have I stepped into the boundaries of infringing copyright, seeing as the asynchronous playback of this clip can constitute a public performance? In any case, enjoy this song first before you think about these. One of my favourite artists.
Weiye Loh

The Way We Live Now - I Tweet, Therefore I Am - NYTimes.com - 0 views

  • Each Twitter post seemed a tacit referendum on who I am, or at least who I believe myself to be. The grocery-store episode telegraphed that I was tuned in to the Seinfeldian absurdities of life; my concern about women’s victimization, however sincere, signaled that I also have a soul. Together they suggest someone who is at once cynical and compassionate, petty yet deep. Which, in the end, I’d say, is pretty accurate.
  • Distilling my personality provided surprising focus, making me feel stripped to my essence. It forced me, for instance, to pinpoint the dominant feeling as I sat outside with my daughter listening to E.B. White. Was it my joy at being a mother? Nostalgia for my own childhood summers? The pleasures of listening to the author’s quirky, underinflected voice? Each put a different spin on the occasion, of who I was within it. Yet the final decision (“Listening to E.B. White’s ‘Trumpet of the Swan’ with Daisy. Slow and sweet.”) was not really about my own impressions: it was about how I imagined — and wanted — others to react to them. That gave me pause. How much, I began to wonder, was I shaping my Twitter feed, and how much was Twitter shaping me?
  • sociologist Erving Goffman famously argued that all of life is performance: we act out a role in every interaction, adapting it based on the nature of the relationship or context at hand. Twitter has extended that metaphor to include aspects of our experience that used to be considered off-set: eating pizza in bed, reading a book in the tub, thinking a thought anywhere, flossing. Effectively, it makes the greasepaint permanent, blurring the lines not only between public and private but also between the authentic and contrived self. If all the world was once a stage, it has now become a reality TV show: we mere players are not just aware of the camera; we mug for it.
  • ...3 more annotations...
  • Second Life, Facebook, MySpace, Twitter — has shifted not only how we spend our time but also how we construct identity. For her coming book, “Alone Together,” Sherry Turkle, a professor at M.I.T., interviewed more than 400 children and parents about their use of social media and cellphones. Among young people especially she found that the self was increasingly becoming externally manufactured rather than internally developed: a series of profiles to be sculptured and refined in response to public opinion. “On Twitter or Facebook you’re trying to express something real about who you are,” she explained. “But because you’re also creating something for others’ consumption, you find yourself imagining and playing to your audience more and more. So those moments in which you’re supposed to be showing your true self become a performance. Your psychology becomes a performance.” Referring to “The Lonely Crowd,” the landmark description of the transformation of the American character from inner- to outer-directed, Turkle added, “Twitter is outer-directedness cubed.”
  • when every thought is externalized, what becomes of insight? When we reflexively post each feeling, what becomes of reflection? When friends become fans, what happens to intimacy? The risk of the performance culture, of the packaged self, is that it erodes the very relationships it purports to create, and alienates us from our own humanity.
  • I am trying to gain some perspective on the perpetual performer’s self-consciousness. That involves trying to sort out the line between person and persona, the public and private self.
  •  
    THE WAY WE LIVE NOW I Tweet, Therefore I Am
Weiye Loh

RealClimate: Feedback on Cloud Feedback - 0 views

  • I have a paper in this week’s issue of Science on the cloud feedback
  • clouds are important regulators of the amount of energy in and out of the climate system. Clouds both reflect sunlight back to space and trap infrared radiation and keep it from escaping to space. Changes in clouds can therefore have profound impacts on our climate.
  • A positive cloud feedback loop posits a scenario whereby an initial warming of the planet, caused, for example, by increases in greenhouse gases, causes clouds to trap more energy, leading to further warming. Such a process amplifies the direct heating by greenhouse gases. Models have long predicted this, but testing the models has proved difficult.
  • ...8 more annotations...
  • Making the issue even more contentious, some of the more credible skeptics out there (e.g., Lindzen, Spencer) have been arguing that clouds behave quite differently from what models predict. In fact, they argue, clouds will stabilize the climate and prevent climate change from occurring (i.e., clouds will provide a negative feedback).
  • In my new paper, I calculate the energy trapped by clouds and observe how it varies as the climate warms and cools during El Nino-Southern Oscillation (ENSO) cycles. I find that, as the climate warms, clouds trap an additional 0.54±0.74W/m2 for every degree of warming. Thus, the cloud feedback is likely positive, but I cannot rule out a slight negative feedback.
  • while a slight negative feedback cannot be ruled out, the data do not support a negative feedback large enough to substantially cancel the well-established positive feedbacks, such as water vapor, as Lindzen and Spencer would argue.
  • I have also compared the results to climate models. Taken as a group, the models substantially reproduce the observations. This increases my confidence that the models are accurately simulating the variations of clouds with climate change.
  • Dr. Spencer is arguing that clouds are causing ENSO cycles, so the direction of causality in my analysis is incorrect and my conclusions are in error. After reading this, I initiated a cordial and useful exchange of e-mails with Dr. Spencer (you can read the full e-mail exchange here). We ultimately agreed that the fundamental disagreement between us is over what causes ENSO. Short paraphrase: Spencer: ENSO is caused by clouds. You cannot infer the response of clouds to surface temperature in such a situation. Dessler: ENSO is not caused by clouds, but is driven by internal dynamics of the ocean-atmosphere system. Clouds may amplify the warming, and that’s the cloud feedback I’m trying to measure.
  • My position is the mainstream one, backed up by decades of research. This mainstream theory is quite successful at simulating almost all of the aspects of ENSO. Dr. Spencer, on the other hand, is as far out of the mainstream when it comes to ENSO as he is when it comes to climate change. He is advancing here a completely new and untested theory of ENSO, based on just one figure in one of his papers (and, as I told him in one of our e-mails, there are other interpretations of those data that do not agree with his interpretation). Thus, the burden of proof is on Dr. Spencer to show that his theory of causality during ENSO is correct. He is, at present, far from meeting that burden. And until Dr. Spencer satisfies this burden, I don't think anyone can take his criticisms seriously.
  • It’s also worth noting that the picture I’m painting of our disagreement (and backed up by the e-mail exchange linked above) is quite different from the picture provided by Dr. Spencer on his blog. His blog is full of conspiracies and purposeful suppression of the truth. In particular, he accuses me of ignoring his work. But as you can see, I have not ignored it — I have dismissed it because I think it has no merit. That’s quite different. I would also like to respond to his accusation that the timing of the paper is somehow connected to the IPCC’s meeting in Cancun. I can assure everyone that no one pressured me in any aspect of the publication of this paper. As Dr. Spencer knows well, authors have no control over when a paper ultimately gets published. And as far as my interest in influencing the policy debate goes, I’ll just say that I’m in College Station this week, while Dr. Spencer is in Cancun. In fact, Dr. Spencer had a press conference in Cancun — about my paper. I didn’t have a press conference about my paper. Draw your own conclusion.
  • This is but another example of how climate scientists are being played by the denialists. You attempted to discuss the issue with Spencer as if he were only doing science. But he is not. He is doing science and politics, and he has no compunction about sandbagging you. There is no gain to you in trying to deal with people like Spencer and Lindzen as colleagues. They are not trustworthy.
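Dessler's core calculation, as described above, is a regression: how much extra energy do clouds trap per degree of surface warming across ENSO cycles? The slope of that regression is the cloud feedback (he reports 0.54±0.74 W/m² per degree). The sketch below illustrates only the shape of that calculation; the anomaly values are synthetic placeholders, not the satellite/ENSO data used in the paper:

```python
# Hypothetical sketch of the feedback estimate: regress anomalies in the
# cloud radiative effect (W/m^2) on surface temperature anomalies (K).
# The ordinary-least-squares slope is the cloud feedback.
def ols_slope(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

temp_anom = [-0.4, -0.2, 0.0, 0.1, 0.3, 0.5]        # K (synthetic values)
cre_anom = [-0.25, -0.10, 0.02, 0.03, 0.18, 0.27]   # W/m^2 (synthetic values)

feedback = ols_slope(temp_anom, cre_anom)  # W/m^2 per K of warming
print(round(feedback, 2))
```

A positive slope means clouds trap more energy as the surface warms (an amplifying feedback); the large ±0.74 uncertainty in the real estimate is why Dessler cannot rule out a slight negative feedback.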
yongernn teo

Ethics and Values Case Study- Mercy Killing, Euthanasia - 8 views

  •  
    THE ETHICAL PROBLEM: Allowing someone to die, mercy death, and mercy killing, Euthanasia: A 24-year-old man named Robert who has a wife and child is paralyzed from the neck down in a motorcycle accident. He has always been very active and hates the idea of being paralyzed. He also is in a great deal of pain, and he has asked his doctors and other members of his family to "put him out of his misery." After several days of such pleading, his brother comes into Robert's hospital ward and asks him if he is sure he still wants to be put out of his misery. Robert says yes and pleads with his brother to kill him. The brother kisses and blesses Robert, then takes out a gun and shoots him, killing him instantly. The brother later is tried for murder and acquitted by reason of temporary insanity. Was what Robert's brother did moral? Do you think he should have been brought to trial at all? Do you think he should have been acquitted? Would you do the same for a loved one if you were asked? THE DISCUSSION: In my opinion, the most dubious part about the case would be the part on Robert pleading with his brother, asking his brother to kill him. This could be his brother's own account of the incident and could/could not have been a plea by Robert. 1) With the assumption that Robert indeed pleaded with his brother to kill him, an ethical analysis as such could be derived: That Robert's brother was only respecting Robert's choice and killed him because he wanted to relieve him from his misery. This could be argued to be ethical using a teleological framework where the focus is on the end-result and the consequences that the action entails. Here, although the act of killing per se may be wrong and illegal, Robert was able to be relieved of his pain and suffering. 2) With an assumption that Robert did not plead with his brother to kill him and that it was his brother's own decision to relieve Robert of all-suffering: In this case, the b
  • ...2 more comments...
  •  
    I find euthanasia to be a very interesting ethical dilemma. Even I myself am caught in the middle. Euthanasia has been termed 'mercy killing' and even 'happy death'. Others may simply just term it as being 'evil'. Is it right to end someone's life even when he or she pleads you to do so? In the first place, is it even right to commit suicide? Once someone pulls off the main support that's keeping the person alive, such as the feeding tube, there is no turning back. Hmm.. Come to think of it, technology is kind of unethical by being made available, for in the past, when someone was dying, they had the right to die naturally. Now, scientific technology is 'forcing' us to stay alive and cling on to a life that may be deemed worthless if we were standing outside our bodies looking at our comatose selves. Then again, this may just be MY personal standpoint. But I have to argue, who gave technology the right to make me a worthless vegetable! (and here I am, attaching a value/judgement onto an immobile human being..) Hence, being incompetent in making decisions for my unconscious self (or perhaps even brain dead), who should take responsibility for my life, for my existence? And on what basis are they allowed to help me out? Taking the other side of the argument, against euthanasia, we can say that the act of ending someone else's life is the act of destroying societal respect for life. Based on the utilitarian perspective, we are not thinking of the overall beneficence for society and are disregarding the moral considerations encompassed within the state's interest to preserve the sanctity of all life. It has been said that life in itself takes priority over all other values. We should let the person live so as to give him/her a chance to wake up or hope for recovery (think comatose patients). But then again we can also argue that life is not the top of the hierarchy! A life without rights is as if not living a life at all? By removing the patient
  •  
    as a human being, you supposedly have a right to live, whether you are mobile or immobile. however, i think that, in the case of euthanasia, you 'give up' your rights when you "show" that you are no longer able to serve the pre-requisites of having the right. for example, if "living" rights are equated to being able to talk, walk, etc., then obviously the opposite means you are no longer able to perform up to the expectations of that right. then again, it is very subjective as to who gets to make that criteria!
  •  
    hmm interesting.. however, a question i have is who and when can this "right" be "given up"? when i am a victim in a car accident, and i lost the ability to breathe, walk and may need months to recover. i am unconscious and the doctor is unable to determine when am i gonna regain consciousness. when should my parents decide i can no longer be able to have any living rights? and taking elaine's point into consideration, is committing suicide even 'right'? if it is legally not right, when i ask someone to take my life and wrote a letter that it was cus i wanted to die, does that make it committing suicide only in the hands of others?
  •  
    Similarly, I question the 'rights' that you have to 'give up' when you no longer 'serve the pre-requisites of having the right'. If living rights mean being able to talk and walk, then where does it leave infants? Where does it leave people who may be handicapped? Have they lost their rights to living?
Weiye Loh

Edge: HOW DOES OUR LANGUAGE SHAPE THE WAY WE THINK? By Lera Boroditsky - 0 views

  • Do the languages we speak shape the way we see the world, the way we think, and the way we live our lives? Do people who speak different languages think differently simply because they speak different languages? Does learning new languages change the way you think? Do polyglots think differently when speaking different languages?
  • For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in my labs at Stanford University and at MIT has helped reopen this question. We have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia.
  • What we have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world.
  • ...15 more annotations...
  • Suppose you want to say, "Bush read Chomsky's latest book." Let's focus on just the verb, "read." To say this sentence in English, we have to mark the verb for tense; in this case, we have to pronounce it like "red" and not like "reed." In Indonesian you need not (in fact, you can't) alter the verb to mark tense. In Russian you would have to alter the verb to indicate tense and gender. So if it was Laura Bush who did the reading, you'd use a different form of the verb than if it was George. In Russian you'd also have to include in the verb information about completion. If George read only part of the book, you'd use a different form of the verb than if he'd diligently plowed through the whole thing. In Turkish you'd have to include in the verb how you acquired this information: if you had witnessed this unlikely event with your own two eyes, you'd use one verb form, but if you had simply read or heard about it, or inferred it from something Bush said, you'd use a different verb form.
  • Clearly, languages require different things of their speakers. Does this mean that the speakers think differently about the world? Do English, Indonesian, Russian, and Turkish speakers end up attending to, partitioning, and remembering their experiences differently just because they speak different languages?
  • For some scholars, the answer to these questions has been an obvious yes. Just look at the way people talk, they might say. Certainly, speakers of different languages must attend to and encode strikingly different aspects of the world just so they can use their language properly. Scholars on the other side of the debate don't find the differences in how people talk convincing. All our linguistic utterances are sparse, encoding only a small part of the information we have available. Just because English speakers don't include the same information in their verbs that Russian and Turkish speakers do doesn't mean that English speakers aren't paying attention to the same things; all it means is that they're not talking about them. It's possible that everyone thinks the same way, notices the same things, but just talks differently.
  • Believers in cross-linguistic differences counter that everyone does not pay attention to the same things: if everyone did, one might think it would be easy to learn to speak other languages. Unfortunately, learning a new language (especially one not closely related to those you know) is never easy; it seems to require paying attention to a new set of distinctions. Whether it's distinguishing modes of being in Spanish, evidentiality in Turkish, or aspect in Russian, learning to speak these languages requires something more than just learning vocabulary: it requires paying attention to the right things in the world so that you have the correct information to include in what you say.
  • Follow me to Pormpuraaw, a small Aboriginal community on the western edge of Cape York, in northern Australia. I came here because of the way the locals, the Kuuk Thaayorre, talk about space. Instead of words like "right," "left," "forward," and "back," which, as commonly used in English, define space relative to an observer, the Kuuk Thaayorre, like many other Aboriginal groups, use cardinal-direction terms — north, south, east, and west — to define space.1 This is done at all scales, which means you have to say things like "There's an ant on your southeast leg" or "Move the cup to the north northwest a little bit." One obvious consequence of speaking such a language is that you have to stay oriented at all times, or else you cannot speak properly. The normal greeting in Kuuk Thaayorre is "Where are you going?" and the answer should be something like "South-southeast, in the middle distance." If you don't know which way you're facing, you can't even get past "Hello."
  • The result is a profound difference in navigational ability and spatial knowledge between speakers of languages that rely primarily on absolute reference frames (like Kuuk Thaayorre) and languages that rely on relative reference frames (like English).2 Simply put, speakers of languages like Kuuk Thaayorre are much better than English speakers at staying oriented and keeping track of where they are, even in unfamiliar landscapes or inside unfamiliar buildings. What enables them — in fact, forces them — to do this is their language. Having their attention trained in this way equips them to perform navigational feats once thought beyond human capabilities. Because space is such a fundamental domain of thought, differences in how people think about space don't end there. People rely on their spatial knowledge to build other, more complex, more abstract representations. Representations of such things as time, number, musical pitch, kinship relations, morality, and emotions have been shown to depend on how we think about space. So if the Kuuk Thaayorre think differently about space, do they also think differently about other things, like time? This is what my collaborator Alice Gaby and I came to Pormpuraaw to find out.
  • To test this idea, we gave people sets of pictures that showed some kind of temporal progression (e.g., pictures of a man aging, or a crocodile growing, or a banana being eaten). Their job was to arrange the shuffled photos on the ground to show the correct temporal order. We tested each person in two separate sittings, each time facing in a different cardinal direction. If you ask English speakers to do this, they'll arrange the cards so that time proceeds from left to right. Hebrew speakers will tend to lay out the cards from right to left, showing that writing direction in a language plays a role.3 So what about folks like the Kuuk Thaayorre, who don't use words like "left" and "right"? What will they do? The Kuuk Thaayorre did not arrange the cards more often from left to right than from right to left, nor more toward or away from the body. But their arrangements were not random: there was a pattern, just a different one from that of English speakers. Instead of arranging time from left to right, they arranged it from east to west. That is, when they were seated facing south, the cards went left to right. When they faced north, the cards went from right to left. When they faced east, the cards came toward the body and so on. This was true even though we never told any of our subjects which direction they faced. The Kuuk Thaayorre not only knew that already (usually much better than I did), but they also spontaneously used this spatial orientation to construct their representations of time.
  • I have described how languages shape the way we think about space, time, colors, and objects. Other studies have found effects of language on how people construe events, reason about causality, keep track of number, understand material substance, perceive and experience emotion, reason about other people's minds, choose to take risks, and even in the way they choose professions and spouses.8 Taken together, these results show that linguistic processes are pervasive in most fundamental domains of thought, unconsciously shaping us from the nuts and bolts of cognition and perception to our loftiest abstract notions and major life decisions. Language is central to our experience of being human, and the languages we speak profoundly shape the way we think, the way we see the world, the way we live our lives.
  • The fact that even quirks of grammar, such as grammatical gender, can affect our thinking is profound. Such quirks are pervasive in language; gender, for example, applies to all nouns, which means that it is affecting how people think about anything that can be designated by a noun.
  • How does an artist decide whether death, say, or time should be painted as a man or a woman? It turns out that in 85 percent of such personifications, whether a male or female figure is chosen is predicted by the grammatical gender of the word in the artist's native language. So, for example, German painters are more likely to paint death as a man, whereas Russian painters are more likely to paint death as a woman.
  • Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key" — a word that is masculine in German and feminine in Spanish — the German speakers were more likely to use words like "hard," "heavy," "jagged," "metal," "serrated," and "useful," whereas Spanish speakers were more likely to say "golden," "intricate," "little," "lovely," "shiny," and "tiny." To describe a "bridge," which is feminine in German and masculine in Spanish, the German speakers said "beautiful," "elegant," "fragile," "peaceful," "pretty," and "slender," and the Spanish speakers said "big," "dangerous," "long," "strong," "sturdy," and "towering." This was true even though all testing was done in English, a language without grammatical gender. The same pattern of results also emerged in entirely nonlinguistic tasks (e.g., rating similarity between pictures). And we can also show that it is aspects of language per se that shape how people think: teaching English speakers new grammatical gender systems influences mental representations of objects in the same way it does with German and Spanish speakers. Apparently even small flukes of grammar, like the seemingly arbitrary assignment of gender to a noun, can have an effect on people's ideas of concrete objects in the world.
  • Even basic aspects of time perception can be affected by language. For example, English speakers prefer to talk about duration in terms of length (e.g., "That was a short talk," "The meeting didn't take long"), while Spanish and Greek speakers prefer to talk about time in terms of amount, relying more on words like "much," "big," and "little" rather than "short" and "long." Our research into such basic cognitive abilities as estimating duration shows that speakers of different languages differ in ways predicted by the patterns of metaphors in their language. (For example, when asked to estimate duration, English speakers are more likely to be confused by distance information, estimating that a line of greater length remains on the test screen for a longer period of time, whereas Greek speakers are more likely to be confused by amount, estimating that a container that is fuller remains longer on the screen.)
  • An important question at this point is: Are these differences caused by language per se or by some other aspect of culture? Of course, the lives of English, Mandarin, Greek, Spanish, and Kuuk Thaayorre speakers differ in a myriad of ways. How do we know that it is language itself that creates these differences in thought and not some other aspect of their respective cultures? One way to answer this question is to teach people new ways of talking and see if that changes the way they think. In our lab, we've taught English speakers different ways of talking about time. In one such study, English speakers were taught to use size metaphors (as in Greek) to describe duration (e.g., a movie is larger than a sneeze), or vertical metaphors (as in Mandarin) to describe event order. Once the English speakers had learned to talk about time in these new ways, their cognitive performance began to resemble that of Greek or Mandarin speakers. This suggests that patterns in a language can indeed play a causal role in constructing how we think.6 In practical terms, it means that when you're learning a new language, you're not simply learning a new way of talking, you are also inadvertently learning a new way of thinking. Beyond abstract or complex domains of thought like space and time, languages also meddle in basic aspects of visual perception — our ability to distinguish colors, for example. Different languages divide up the color continuum differently: some make many more distinctions between colors than others, and the boundaries often don't line up across languages.
  • To test whether differences in color language lead to differences in color perception, we compared Russian and English speakers' ability to discriminate shades of blue. In Russian there is no single word that covers all the colors that English speakers call "blue." Russian makes an obligatory distinction between light blue (goluboy) and dark blue (siniy). Does this distinction mean that siniy blues look more different from goluboy blues to Russian speakers? Indeed, the data say yes. Russian speakers are quicker to distinguish two shades of blue that are called by the different names in Russian (i.e., one being siniy and the other being goluboy) than if the two fall into the same category. For English speakers, all these shades are still designated by the same word, "blue," and there are no comparable differences in reaction time. Further, the Russian advantage disappears when subjects are asked to perform a verbal interference task (reciting a string of digits) while making color judgments but not when they're asked to perform an equally difficult spatial interference task (keeping a novel visual pattern in memory). The disappearance of the advantage when performing a verbal task shows that language is normally involved in even surprisingly basic perceptual judgments — and that it is language per se that creates this difference in perception between Russian and English speakers.
  • What it means for a language to have grammatical gender is that words belonging to different genders get treated differently grammatically and words belonging to the same grammatical gender get treated the same grammatically. Languages can require speakers to change pronouns, adjective and verb endings, possessives, numerals, and so on, depending on the noun's gender. For example, to say something like "my chair was old" in Russian (moy stul bil' stariy), you'd need to make every word in the sentence agree in gender with "chair" (stul), which is masculine in Russian. So you'd use the masculine form of "my," "was," and "old." These are the same forms you'd use in speaking of a biological male, as in "my grandfather was old." If, instead of speaking of a chair, you were speaking of a bed (krovat'), which is feminine in Russian, or about your grandmother, you would use the feminine form of "my," "was," and "old."
  • For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in my labs at Stanford University and at MIT has helped reopen this question. We have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia. What we have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world. Language is a uniquely human gift, central to our experience of being human. Appreciating its role in constructing our mental lives brings us one step closer to understanding the very nature of humanity.
Weiye Loh

Rationally Speaking: The sorry state of higher education - 0 views

  • two disconcerting articles crossed my computer screen, both highlighting the increasingly sorry state of higher education, though from very different perspectives. The first is “Ed Dante’s” (actually a pseudonym) piece in the Chronicle of Higher Education, entitled The Shadow Scholar. The second is Gregory Petsko’s A Faustian Bargain, published of all places in Genome Biology.
  • There is much to be learned by educators in the Shadow Scholar piece, except the moral that “Dante” would like us to take from it. The anonymous author writes: “Pointing the finger at me is too easy. Why does my business thrive? Why do so many students prefer to cheat rather than do their own work? Say what you want about me, but I am not the reason your students cheat.”
  • The point is that plagiarism and cheating happen for a variety of reasons, one of which is the existence of people like Mr. Dante and his company, who set up a business that is clearly unethical and should be illegal. So, pointing fingers at him and his ilk is perfectly reasonable. Yes, there obviously is a “market” for cheating in higher education, and there are complex reasons for it, but he is in a position similar to that of the drug dealer who insists that he is simply providing the commodity to satisfy society’s demand. Much too easy of a way out, and one that doesn’t fly in the case of drug dealers, and shouldn’t fly in the case of ghost cheaters.
  • ...16 more annotations...
  • As a teacher at the City University of New York, I am constantly aware of the possibility that my students might cheat on their tests. I do take some elementary precautionary steps
  • Still, my job is not that of the policeman. My students are adults who theoretically are there to learn. If they don’t value that learning and prefer to pay someone else to fake it, so be it, ultimately it is they who lose in the most fundamental sense of the term. Just like drug addicts, to return to my earlier metaphor. And just as in that other case, it is enablers like Mr. Dante who simply can’t duck the moral blame.
  • An open letter to the president of SUNY-Albany, penned by molecular biologist Gregory Petsko. The SUNY-Albany president has recently announced the closing — for budgetary reasons — of the departments of French, Italian, Classics, Russian and Theater Arts at his university.
  • Petsko begins by taking on one of the alleged reasons why SUNY-Albany is slashing the humanities: low enrollment. He correctly points out that the problem can be solved overnight at the stroke of a pen: stop abdicating your responsibilities as educators and actually put constraints on what your students have to take in order to graduate. Make courses in English literature, foreign languages, philosophy and critical thinking, the arts and so on, mandatory or one of a small number of options that the students must consider in order to graduate.
  • But, you might say, that’s cheating the market! Students clearly don’t want to take those courses, and a business should cater to its customers. That type of reasoning is among the most pernicious and idiotic I’ve ever heard. Students are not clients (if anything, their parents, who usually pay the tuition, are), they are not shopping for a new bag or pair of shoes. They do not know what is best for them educationally, that’s why they go to college to begin with. If you are not convinced about how absurd the students-as-clients argument is, consider an analogy: does anyone with functioning brain cells argue that since patients in a hospital pay a bill, they should be dictating how the brain surgeon operates? I didn’t think so.
  • Petsko then tackles the second lame excuse given by the president of SUNY-Albany (and common among the upper administration of plenty of public universities): I can’t do otherwise because of the legislature’s draconian cuts. Except that university budgets are simply too complicated for there not to be any other option. I know this first hand, I’m on a special committee at my own college looking at how to creatively deal with budget cuts handed down to us from the very same (admittedly small minded and dysfunctional) New York state legislature that has prompted SUNY-Albany’s action. As Petsko points out, the president there didn’t even think of involving the faculty and staff in a broad discussion of how to deal with the crisis, he simply announced the cuts on a Friday afternoon and then ran for cover. An example of very poor leadership to say the least, and downright hypocrisy considering all the talk that the same administrator has been dishing out about the university “community.”
  • Finally, there is the argument that the humanities don’t pay for their own way, unlike (some of) the sciences (some of the time). That is indubitably true, but irrelevant. Universities are not businesses, they are places of higher learning. Yes, of course they need to deal with budgets, fund raising and all the rest. But the financial and administrative side has one goal and one goal only: to provide the best education to the students who attend that university.
  • That education simply must include the sciences, philosophy, literature, and the arts, as well as more technical or pragmatic offerings such as medicine, business and law. Why? Because that’s the kind of liberal education that makes for an informed and intelligent citizenry, without which our democracy is but empty talk, and our lives nothing but slavery to the marketplace.
  • Maybe this is not how education works in the US. I thought that general (or compulsory) education (i.e. up to high school) is designed to make sure that citizens in a democratic country can perform their civil duties. A balanced and well-rounded education, which includes a healthy mixture of science and humanities, is indeed very important for this purpose. However, college-level education is for personal growth and therefore the person must have a large say about what kind of classes he or she chooses to take. I am disturbed by Massimo's hospital analogy. Students are not ill. They don't go to college to be cured, or to be good citizens. They go to college to learn things that *they* want to learn. Patients are passive. Students are not. I agree that students typically do not know what kind of education is good for them. But who does?
  • students do have a say in their education. They pick their major, and there are electives. But I object to the idea that they can customize their major any way they want. That assumes they know what the best education for them is, and they don't. That's the point of education.
  • The students are in your class to get a good grade; any learning that takes place is purely incidental. Those good grades will look good on their transcript and might convince a future employer that they are smart and thus are worth paying more.
  • I don't know what the dollar to GPA exchange rate is these days, but I don't doubt that there is one.
  • Just how many of your students do you think will remember the extensive complex jargon of philosophy more than a couple of months after they leave your classroom?
  • "...and our lives nothing but slavery to the marketplace." We are there. Welcome. Where have you been all this time? In a capitalistic/plutocratic society money is power (and free speech too, according to the supreme court). Money means a larger/better house/car/clothing/vacation than your neighbor and consequently better mating opportunities. You can mostly blame the women for that one, I think, just like the peacock's tail.
  • If a student of surgery fails to learn they might maim, kill or cripple someone. If an engineer of airplanes fails to learn they might design a faulty aircraft that fails and kills people. If a student of chemistry fails to learn they might design a faulty drug with unintended and unfortunate side effects. But what exactly would be the harm if a student of philosophy fails to learn what Aristotle had to say about elements or what Plato had to say about perfect forms? These things are so divorced from people's everyday activities as to be rendered all but meaningless.
  • human knowledge grows by leaps and bounds every day, but human brain capacity does not, so the portion of human knowledge you can personally hold gets smaller by the minute. Learn (and remember) as much as you can as fast as you can and you will still lose ground. You certainly have your work cut out for you emphasizing the importance of Thales in the Age of Twitter and whatever follows it next year.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
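Regression to the mean is easy to see in a toy simulation. All the specifics below (the lab count, sample sizes, and a true effect of exactly zero) are hypothetical choices for illustration, not data from the article: if many labs measure a null effect and only the most extreme first result gets followed up, the replication will almost always come out smaller.

```python
import random

def simulate_decline(n_labs=1000, n_subjects=30, seed=1):
    """Many labs measure a true-null effect; the most extreme first-round
    result is the one that gets noticed, and an independent replication
    regresses back toward zero."""
    rng = random.Random(seed)

    def one_study():
        # Mean of n_subjects noisy measurements of an effect whose true size is zero.
        return sum(rng.gauss(0.0, 1.0) for _ in range(n_subjects)) / n_subjects

    first_round = [one_study() for _ in range(n_labs)]
    best = max(first_round)    # the striking result that launches a literature
    replication = one_study()  # a fresh, independent second measurement
    return best, replication

best, replication = simulate_decline()
```

Run this with different seeds and the replication lands below the selected first result almost every time; no mysterious "cosmic habituation" is needed, only selection on noise.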
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
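The mechanics of publication bias can be sketched numerically. The figures below (a true effect of 0.2, studies of 25 subjects, a one-sided cut at z > 1.96) are assumptions for the sketch, not values from Sterling or Fisher: when every study estimates the same small true effect but only "significant" positive estimates reach the journals, the published average necessarily overstates the truth.

```python
import math
import random

def mean_published_effect(true_effect=0.2, n=25, n_studies=2000, seed=7):
    """Simulate many identical studies of a small true effect and average
    only the ones that clear the conventional 5% significance line."""
    rng = random.Random(seed)
    se = 1.0 / math.sqrt(n)  # standard error of each study's estimate
    published = []
    for _ in range(n_studies):
        estimate = rng.gauss(true_effect, se)
        if estimate / se > 1.96:  # positive and 'significant': gets published
            published.append(estimate)
    return sum(published) / len(published)

inflated = mean_published_effect()
# Every published estimate must exceed 1.96 * se (about 0.39 here),
# so the published literature cannot help but overstate the 0.2 truth.
```

The inflation falls straight out of the filter: the threshold itself sits above the true effect, so the literature's average is biased upward even though every individual study is honest.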
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
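The ingredient behind the funnel shape is just sampling error shrinking with sample size. A minimal sketch (the sample sizes, study counts, and zero true value below are hypothetical):

```python
import random
import statistics

def spread_of_estimates(n, n_studies=500, true_value=0.0, seed=3):
    """Standard deviation of study-level estimates at a fixed sample size n:
    the quantity the funnel graph displays. Small-n studies scatter widely;
    large-n studies cluster near the true value."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.gauss(true_value, 1.0) for _ in range(n)) / n
        for _ in range(n_studies)
    ]
    return statistics.stdev(estimates)

small_n = spread_of_estimates(n=10)
large_n = spread_of_estimates(n=250)
# Under honest reporting the scatter is symmetric around the true value
# at every sample size; a skew toward positive results among the small-n
# studies is the fingerprint of selective reporting that Palmer found.
```

Since the spread scales as one over the square root of n, the 250-subject studies cluster about five times more tightly than the 10-subject ones, which is exactly why asymmetry at the wide end of the funnel is so diagnostic.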
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
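The arithmetic behind "significance chasing" is simple. Assuming the repeated looks at the data were independent tests of a true null (real subgroup analyses only approximate this), each look is another 5% lottery ticket:

```python
def chance_of_false_positive(k_tests, alpha=0.05):
    """Probability that at least one of k_tests independent null comparisons
    crosses the significance line: 1 - (1 - alpha) ** k_tests."""
    return 1.0 - (1.0 - alpha) ** k_tests

# One look at the data: a 5% chance of a spurious 'finding'.
# Twenty looks (subgroups, outcome measures, covariates): about 64%.
p_twenty = chance_of_false_positive(20)
```

With twenty comparisons the odds of finding "anything that seems worthy" are better than even, which is why playing around with the numbers so reliably produces a publishable result.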
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
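The dynamic described in these excerpts, in which only striking, statistically significant results get published and then shrink on replication, can be sketched with a toy Monte Carlo simulation. Everything here is an illustrative assumption (the true effect size, sample size, and significance cut-off are invented, not data from any study mentioned above):

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2    # hypothetical small true effect, in SD units
N = 30               # per-study sample size
SE = 1 / (N ** 0.5)  # approximate standard error of the estimate

def run_study():
    """Simulate one study's estimated effect size."""
    return random.gauss(TRUE_EFFECT, SE)

# Run many initial studies; only the "significant" ones
# (estimate more than ~2 standard errors above zero) get published.
initial = [run_study() for _ in range(10_000)]
published = [e for e in initial if e > 2 * SE]

# Each published finding is then replicated once, with no filter.
replications = [run_study() for _ in published]

print(round(statistics.mean(published), 2))     # inflated
print(round(statistics.mean(replications), 2))  # closer to TRUE_EFFECT
```

Because only estimates that clear the significance bar count as "published", their average overstates the true effect; the unfiltered replications regress back toward it, which from the outside looks like an effect "wearing off".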
Weiye Loh

Want your opinions distorted and misrepresented? Write in to The Straits Time... - 0 views

  • Letter sent by my good friend Samuel C. Wee to ST on the 8th of March, quoting statistics from their Page One infographic: (Read this closely!) I read with keen interest the news that social mobility in Singapore’s education system is still alive and well (“School system still ‘best way to move up’”; Monday). It is indeed heartwarming to learn that only 90% of children from one-to-three-room flats do not make it to university. I firmly agree with our Education Minister Dr Ng Eng Hen, who declared that “education remains the great social leveller in Singaporean society”. His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination. In recent years, there has been much debate about elitism and the impact that a family’s financial background has on a child’s educational prospects. Therefore, it was greatly reassuring to read about Dr Ng’s great faith in our “unique, meritocratic Singapore system”, which ensures that good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students. I would like to commend Ms Rachel Chang on her outstanding article. On behalf of the financially disadvantaged students of Singapore, I thank the fine journalists of the Straits Times for their tireless work in bringing to Singaporeans accurate and objective reporting.
  • What was actually published last Friday, March 18th 2011 A reassuring experience of meritocratic system I READ with keen interest the news that social mobility in Singapore’s education system is still alive and well (‘School system still ‘best way to move up”; March 8). It is indeed heartwarming to learn that almost 50 per cent of children from one- to three-room flats make it to university and polytechnics. I firmly agree with Education Minister Ng Eng Hen, who said that education remains the great social leveller in Singapore society. His statement is backed by the statistic that about 50 per cent of children from the bottom third of the socio-economic bracket score within the top two-thirds of their Primary School Leaving Examination cohort. There has been much debate about elitism and the impact that a family’s financial background has on a child’s educational prospects. Therefore, it was reassuring to read about Dr Ng’s own experience of the ‘unique, meritocratic Singapore system’: he grew up in a three-room flat with five other siblings, and his medical studies at the National University of Singapore were heavily subsidised; later, he trained as a cancer surgeon in the United States using a government scholarship. The system also ensures that good, able students from the middle- and high-income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students.
  • To give me the byline would be an outrageous flattery and a gross injustice to the forum editors of ST, who took the liberty of taking my observations about the statistics and subtly replacing them with more politically correct (but significantly and essentially different) statistics.
  • ...3 more annotations...
  • Of course, ST reserves the right to edit my letter for clarity and length. When said statistics in question were directly taken from their original article, though, one has to wonder if there hasn’t been a breakdown in communication over there. I’m dreadfully sorry, forum editors, I should have double-checked my original source (your journalist Ms Rachel Chang) before sending my letter.
  • take a look at how my pride in our meritocratic system in my original letter has been transfigured into awe at Dr Ng’s background, for example! Dear friends, when an editor takes the time and effort to not just paraphrase but completely and utterly transform your piece in both intent and meaning, then what can we say but bravo.
  • There are surely no lazy slackers over at the Straits Times; instead we have evidently men and women who dedicate time and effort to correct their misguided readers, and protect them from the shame of having their real opinions published.
Weiye Loh

Have you heard of the Koch Brothers? | the kent ridge common - 0 views

  • I return to the Guardian online site expressly to search for those elusive articles on Wisconsin. The main page has none. I click on News – US, and there are none. I click on ‘Commentary is Free’- US, and find one article on protests in Ohio. I go to the New York Times online site. Earlier, on my phone, I had seen one article at the bottom of the main page on Wisconsin. By the time I managed to get on my computer to find it again however, the NYT main page was quite devoid of any articles on the protests at all. I am stumped; clearly, I have to reconfigure my daily news sources and reading diet.
  • It is not that the media is not covering the protests in Wisconsin at all – but effective media coverage in the US at least, in my view, is as much about volume as it is about substantive coverage. That week, more prime-time slots and the bulk of the US national attention were given to Charlie Sheen and his crazy antics (whatever they were about, I am still not too sure) than to Libya and the rest of the Middle East, or more significantly, to a pertinent domestic issue, the teacher protests – not just in Wisconsin but also in other cities in the north-eastern part of the US.
  • In the March 2nd episode of The Colbert Report, it was shown that the Fox News coverage of the Wisconsin protests had re-used footage from more violent protests in California (the palm trees in the background gave Fox News away). Bill O’Reilly at Fox News had apparently issued an apology – but how many viewers who had seen the footage and believed it to be on-the-ground footage of Wisconsin would have followed-up on the report and the apology? And anyway, why portray the teacher protests as violent?
  • ...12 more annotations...
  • In this New York Times’ article, “Teachers Wonder, Why the scorn?“, the writer notes the often scathing comments from counter-demonstrators – “Oh you pathetic teachers, read the online comments and placards of counterdemonstrators. You are glorified baby sitters who leave work at 3 p.m. You deserve minimum wage.” What had begun as an ostensibly ‘economic reform’ targeted at teachers’ unions has gradually transmogrified into a kind of “character attack” to this section of American society – teachers are people who wage violent protests (thanks to borrowed footage from the West Coast) and they are undeserving of their economic benefits, and indeed treat these privileges as ‘rights’. The ‘war’ is waged on multiple fronts, economic, political, social, psychological even — or at least one gets this sort of picture from reading these articles.
  • as Singaporeans with a uniquely Singaporean work ethic, we may perceive functioning ‘trade unions’ as those institutions in the so-called “West” where they amass lots of membership, then hold the government ‘hostage’ in order to negotiate higher wages and benefits. Think of trade unions in the Singaporean context, and I think of SIA pilots. And of LKY’s various firm and stern comments on those issues. Think of trade unions and I think of strikes in France, in South Korea, when I was younger, and of my mum saying, “How irresponsible!” before flipping the TV channel.
  • The reason why I think the teachers’ protests should not be seen solely as an issue about trade-unions, and evaluated myopically and naively in terms of whether trade unions are ‘good’ or ‘bad’ is because the protests feature in a larger political context with the billionaire Koch brothers at the helm, financing and directing much of what has transpired in recent weeks. Or at least according to certain articles which I present here.
  • In this NYT article entitled “Billionaire Brothers’ Money Plays Role in Wisconsin Dispute“, the writer noted that Koch Industries had been “one of the biggest contributors to the election campaign of Gov. Scott Walker of Wisconsin, a Republican who has championed the proposed cuts.” Further, the president of Americans for Prosperity, a nonprofit group financed by the Koch brothers, had reportedly addressed counter-demonstrators last Saturday, saying that “the cuts were not only necessary, but they also represented the start of a much-needed nationwide move to slash public-sector union benefits”, and in his own words: “We are going to bring fiscal sanity back to this great nation.” All this rhetoric would be more convincing to me if they weren’t funded by the same two billionaires who financially enabled Walker’s governorship.
  • I now refer you to a long piece by Jane Mayer for The New Yorker titled, “Covert Operations: The billionaire brothers who are waging a war against Obama“. According to her, “The Kochs are longtime libertarians who believe in drastically lower personal and corporate taxes, minimal social services for the needy, and much less oversight of industry—especially environmental regulation. These views dovetail with the brothers’ corporate interests.”
  • Their libertarian modus operandi involves great expenses in lobbying, in political contributions and in setting up think tanks. From 2006-2010, Koch Industries have led energy companies in political contributions; “[i]n the second quarter of 2010, David Koch was the biggest individual contributor to the Republican Governors Association, with a million-dollar donation.” More statistics, or at least those of the non-anonymous donation records, can be found on page 5 of Mayer’s piece.
  • Naturally, the Democrats also have their billionaire donors, most notably in the form of George Soros. Mayer writes that he has made ‘generous private contributions to various Democratic campaigns, including Obama’s.” Yet what distinguishes him from the Koch brothers here is, as Michael Vachon, his spokesman, argued, ‘that Soros’s giving is transparent, and that “none of his contributions are in the service of his own economic interests.” ‘ Of course, this must be taken with a healthy dose of salt, but I will note here that in Charles Ferguson’s documentary Inside Job, which was about the 2008 financial crisis, George Soros was one of those interviewed who was not portrayed negatively. (My review of it is here.)
  • Of the Koch brothers’ political investments, what interested me more was the US’ “first libertarian thinktank”, the Cato Institute. Mayer writes, ‘When President Obama, in a 2008 speech, described the science on global warming as “beyond dispute,” the Cato Institute took out a full-page ad in the Times to contradict him. Cato’s resident scholars have relentlessly criticized political attempts to stop global warming as expensive, ineffective, and unnecessary. Ed Crane, the Cato Institute’s founder and president, told [Mayer] that “global-warming theories give the government more control of the economy.” ‘
  • K Street refers to a major street in Washington, D.C. where major think tanks, lobbyists and advocacy groups are located.
  • with recent developments as the Citizens United case where corporations are now ‘persons’ and have no caps in political contributions, the Koch brothers are ever better-positioned to take down their perceived big, bad government and carry out their ideological agenda as sketched in Mayer’s piece
  • with much important news around the world jostling for our attention – earthquake in Japan, Middle East revolutions – the passing of an anti-union bill (which finally happened today, for better or for worse) in an American state is unlikely to make a headline able to compete with natural disasters and revolutions. Then, to quote Wisconsin Governor Scott Walker during that prank call conversation, “Sooner or later the media stops finding it [the teacher protests] interesting.”
  • What remains more puzzling for me is why the American public seems to buy into the Koch-funded libertarian rhetoric. Mayer writes: “Income inequality in America is greater than it has been since the nineteen-twenties, and since the seventies the tax rates of the wealthiest have fallen more than those of the middle class. Yet the brothers’ message has evidently resonated with voters: a recent poll found that fifty-five per cent of Americans agreed that Obama is a socialist.” I suppose that not knowing who is funding the political rhetoric makes it easier for the public to imbibe it.
Pergolas Adelaide

Pergolas for Quality Home Improvement - 1 views

I wanted to enhance my home from the outside. It seemed that my garden needed something additional to make it perfectly attractive. I was thinking of putting up a pergola in my front yard, yet, I d...

Pergolas Adelaide

started by Pergolas Adelaide on 04 Oct 11 no follow-up yet
Weiye Loh

The Great Organ Bazaar - Susanne Lundin - Project Syndicate - 0 views

  • All of this Internet activity is but the tip of the iceberg of a new and growing global human-tissue economy. Indeed, the World Health Organization (WHO) has estimated that about 10% of organ transplants around the world stem from purely commercial transactions.
  • Trade in organs follows a clear, geographically linked pattern: people from rich countries buy the organs, and people in poor countries sell them. In my research on organ trafficking, I have entered some of these shadow markets, where body parts from the poor, war victims, and prisoners are commodities, bought or stolen for transplant into affluent ill people.
  • Organ trafficking depends on several factors. One is people in distress. They are economically or socially disadvantaged, or live in war-torn societies with prevalent crime and a thriving black market. On the demand side are people who are in danger of dying unless they receive an organ transplant. Additionally, there are organ brokers who arrange the deals between sellers and buyers.
  • ...2 more annotations...
  • Trade in humans and their bodies is not a new phenomenon, but today’s businesses are historically unique, because they require advanced biomedicine, as well as ideas and values that enhance the trade in organs. Western medicine starts from the view that human illness and death are failures to be combated. It is within this conceptual climate – the dream of the regenerative body – that transplantation technology develops and demand for biological replacement parts grows.
  • In an era of transplants on demand, there is no way around this dilemma. The biological imperatives that guide the priority system of transplant waiting lists are easily transformed into economic values. As always where demand exceeds supply, people may not accept waiting their turn – and other countries and other peoples’ bodies give them the alternative they seek.
  •  
    The Web site 88DB.com Philippines is an active online portal that allows service providers and consumers to find and interact with each other. Naoval, an Indonesian man with "AB blood type, no drugs and no alcohol," wants to sell his kidney. Another man says, "I am a Filipino. I am willing to sell my kidney for my wife. She has breast cancer and I can't afford her medications." Then there is Enrique, who is "willing to donate my kidney for an exchange. 21 years old and healthy." Other offers of this type could, just a few years ago, be found at www.liver4you.org, which promised kidneys for $80,000-$110,000. The costs of the operation, including the fees of the surgeons - licensed in the United States, Great Britain, or the Philippines - would be included in the price.
Weiye Loh

The internet: is it changing the way we think? | Technology | The Observer - 0 views

  • American magazine the Atlantic lobs an intellectual grenade into our culture. In the summer of 1945, for example, it published an essay by the Massachusetts Institute of Technology (MIT) engineer Vannevar Bush entitled "As We May Think". It turned out to be the blueprint for what eventually emerged as the world wide web. Two summers ago, the Atlantic published an essay by Nicholas Carr, one of the blogosphere's most prominent (and thoughtful) contrarians, under the headline "Is Google Making Us Stupid?".
  • Carr wrote, "I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going – so far as I can tell – but it's changing. I'm not thinking the way I used to think. I can feel it most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument and I'd spend hours strolling through long stretches of prose. That's rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle."
  • Carr's target was not really the world's leading search engine, but the impact that ubiquitous, always-on networking is having on our cognitive processes. His argument was that our deepening dependence on networking technology is indeed changing not only the way we think, but also the structure of our brains.
  • ...9 more annotations...
  • Carr's article touched a nerve and has provoked a lively, ongoing debate on the net and in print (he has now expanded it into a book, The Shallows: What the Internet Is Doing to Our Brains). This is partly because he's an engaging writer who has vividly articulated the unease that many adults feel about the way their modi operandi have changed in response to ubiquitous networking.
  • Who bothers to write down or memorise detailed information any more, for example, when they know that Google will always retrieve it if it's needed again? The web has become, in a way, a global prosthesis for our collective memory.
  • easy to dismiss Carr's concern as just the latest episode of the moral panic that always accompanies the arrival of a new communications technology. People fretted about printing, photography, the telephone and television in analogous ways. It even bothered Plato, who argued that the technology of writing would destroy the art of remembering.
  • many commentators who accept the thrust of his argument seem not only untroubled by its far-reaching implications but are positively enthusiastic about them. When the Pew Research Centre's Internet & American Life project asked its panel of more than 370 internet experts for their reaction, 81% of them agreed with the proposition that "people's use of the internet has enhanced human intelligence".
  • As a writer, thinker, researcher and teacher, what I can attest to is that the internet is changing our habits of thinking, which isn't the same thing as changing our brains. The brain is like any other muscle – if you don't stretch it, it gets both stiff and flabby. But if you exercise it regularly, and cross-train, your brain will be flexible, quick, strong and versatile.
  • The internet is analogous to a weight-training machine for the brain, as compared with the free weights provided by libraries and books. Each method has its advantage, but used properly one works you harder. Weight machines are directive and enabling: they encourage you to think you've worked hard without necessarily challenging yourself. The internet can be the same: it often tells us what we think we know, spreading misinformation and nonsense while it's at it. It can substitute surface for depth, imitation for originality, and its passion for recycling would surpass the most committed environmentalist.
  • I've seen students' thinking habits change dramatically: if information is not immediately available via a Google search, students are often stymied. But of course what a Google search provides is not the best, wisest or most accurate answer, but the most popular one.
  • But knowledge is not the same thing as information, and there is no question to my mind that the access to raw information provided by the internet is unparalleled and democratising. Admittance to elite private university libraries and archives is no longer required, as they increasingly digitise their archives. We've all read the jeremiads that the internet sounds the death knell of reading, but people read online constantly – we just call it surfing now. What they are reading is changing, often for the worse; but it is also true that the internet increasingly provides a treasure trove of rare books, documents and images, and as long as we have free access to it, then the internet can certainly be a force for education and wisdom, and not just for lies, damned lies, and false statistics.
  • In the end, the medium is not the message, and the internet is just a medium, a repository and an archive. Its greatest virtue is also its greatest weakness: it is unselective. This means that it is undiscriminating, in both senses of the word. It is indiscriminate in its principles of inclusion: anything at all can get into it. But it also – at least so far – doesn't discriminate against anyone with access to it. This is changing rapidly, of course, as corporations and governments seek to exert control over it. Knowledge may not be the same thing as power, but it is unquestionably a means to power. The question is, will we use the internet's power for good, or for evil? The jury is very much out. The internet itself is disinterested: but what we use it for is not.
  •  
    The internet: is it changing the way we think? American writer Nicholas Carr's claim that the internet is not only shaping our lives but physically altering our brains has sparked a lively and ongoing debate, says John Naughton. Below, a selection of writers and experts offer their opinion
Weiye Loh

Your Move: The Maze of Free Will - Opinionator Blog - NYTimes.com - 0 views

  • According to the Basic Argument, it makes no difference whether determinism is true or false. We can’t be ultimately morally responsible either way.
  • It may be that we stand condemned by Nietzsche: The causa sui is the best self-contradiction that has been conceived so far. It is a sort of rape and perversion of logic. But the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense. The desire for “freedom of the will” in the superlative metaphysical sense, which still holds sway, unfortunately, in the minds of the half-educated; the desire to bear the entire and ultimate responsibility for one’s actions oneself, and to absolve God, the world, ancestors, chance, and society involves nothing less than to be precisely this causa sui and, with more than Baron Münchhausen’s audacity, to pull oneself up into existence by the hair, out of the swamps of nothingness … (“Beyond Good and Evil,” 1886).
  • the novelist Ian McEwan, who wrote to me: “I see no necessary disjunction between having no free will (those arguments seem watertight) and assuming moral responsibility for myself. The point is ownership. I own my past, my beginnings, my perceptions. And just as I will make myself responsible if my dog or child bites someone, or my car rolls backwards down a hill and causes damage, so I take on full accountability for the little ship of my being, even if I do not have control of its course. It is this sense of being the possessor of a consciousness that makes us feel responsible for it.”
  • ...2 more annotations...
  • Choice, free or coerced, is neither a sufficient nor a necessary condition for responsibility.
  • All that is required to be responsible for an event is to be in the causal chain leading to an event.
  •  
    July 22, 2010, 4:15 PM Your Move: The Maze of Free Will By GALEN STRAWSON