
New Media Ethics 2009 course / Group items tagged: Credit


Weiye Loh

Card fraud: Banks not doing enough - 0 views

  • Customers cannot be faulted for merchants' negligence in verifying signatures on credit cards, or for the banks' failure to implement an effective, foolproof secondary security mechanism to protect cardholders.
  •  
    Contrast this case in Singapore with other countries like the United States or Malaysia, which limit consumers' liability in such cases to a specific amount - which policy is better? On another note, I have always been intrigued by the fact that organizations, while being infinitely more powerful, are legally regarded as individuals with individual rights. What does this say about the identity of organizations?
  •  
    The issue of responsibility was heavily debated, and the parties identified are: 1. the credit card owners; 2. the banks; 3. the retailers; and 4. government bodies, e.g. MAS and CASE, with their regulations and policies. Which party do you all think should shoulder the moral obligations of owning the technology of cashless payment? How should this then translate into laws and enforcement?
  •  
    The case came to light when a certain Mdm Tan Shock Ling's credit cards were stolen. Within an hour, the fraudsters used her credit cards to chalk up bills amounting to $17k. She was only notified of the purchases when a bank called her to confirm whether she had just bought a Rolex watch using one of her credit cards. The banks asked her to pay the bills because they will only cover payments made after she has reported the loss of her credit cards. There were a few articles on the issue, with The New Paper sending its reporters (Chinese women) out shopping with an Indian man's credit card. Their investigative journalism showed that retailers are generally lax in verifying the purchaser's identity vis-a-vis the name and signature.
Weiye Loh

China used prisoners in lucrative internet gaming work | World news | guardian.co.uk - 0 views

  • "Prison bosses made more money forcing inmates to play games than they do forcing people to do manual labour," Liu told the Guardian. "There were 300 prisoners forced to play games. We worked 12-hour shifts in the camp. I heard them say they could earn 5,000-6,000rmb [£470-570] a day. We didn't see any of the money. The computers were never turned off."
  • "If I couldn't complete my work quota, they would punish me physically. They would make me stand with my hands raised in the air and after I returned to my dormitory they would beat me with plastic pipes. We kept playing until we could barely see things," he said.
  • "gold farming", the practice of building up credits and online value through the monotonous repetition of basic tasks in online games such as World of Warcraft. The trade in virtual assets is very real, and outside the control of the games' makers. Millions of gamers around the world are prepared to pay real money for such online credits, which they can use to progress in the online games. The trading of virtual currencies in multiplayer games has become so rampant in China that it is increasingly difficult to regulate. In April, the Sichuan provincial government in central China launched a court case against a gamer who stole credits online worth about 3000rmb.
  • lack of regulations has meant that even prisoners can be exploited in this virtual world for profit.
  • The emergence of gold farming as a business in China – whether in prisons or sweatshops – could raise new questions over the exporting of goods, real or virtual, from the country. "Prison labour is still very widespread – it's just that goods travel a much more complex route to come to the US these days. And it is not illegal to export prison goods to Europe," said Nicole Kempton from the Laogai foundation, a Washington-based group which opposes the forced labour camp system in China.
Weiye Loh

It's appalling - 0 views

  •  
    Credit Card fraud. CASE's reply.
Weiye Loh

Credit card stolen? Mind the pitfalls - 0 views

  •  
    More on credit card fraud
Weiye Loh

More credit card fraud if consumers less liable? - 0 views

  •  
    Same case on credit card fraud
Weiye Loh

The Matthew Effect § SEEDMAGAZINE.COM - 0 views

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  •  
    WHEN IT COMES TO SCIENTIFIC PUBLISHING AND FAME, THE RICH GET RICHER AND THE POOR GET POORER. HOW CAN WE BREAK THIS FEEDBACK LOOP?
Weiye Loh

homunculus: I can see clearly now - 0 views

  • Here’s a little piece I wrote for Nature news. To truly appreciate this stuff you need to take a look at the slideshow. There will be a great deal more on early microscopy in my next book, probably called Curiosity and scheduled for next year.
  • The first microscopes were a lot better than they are given credit for. That’s the claim of microscopist Brian Ford, based at Cambridge University and a specialist in the history and development of these instruments.
  • Ford says it is often suggested that the microscopes used by the earliest pioneers in the seventeenth century, such as Robert Hooke and Antony van Leeuwenhoek, gave only very blurred images of structures such as cells and micro-organisms. Hooke was the first to record cells, seen in thin slices of cork, while Leeuwenhoek described tiny ‘animalcules’, invisible to the naked eye, in rain water in 1676. The implication is that these breakthroughs in microscopic biology involved more than a little guesswork and invention. But Ford has looked again at the capabilities of some of Leeuwenhoek’s microscopes, and says ‘the results were breathtaking’. ‘The images were comparable with those you would obtain from a modern light microscope’, he adds in an account of his experiments in Microscopy and Analysis [1].
  • The poor impression of the seventeenth-century instruments, says Ford, is due to bad technique in modern reconstructions. In contrast to the hazy images shown in some museums and television documentaries, careful attention to such factors as lighting can produce micrographs of startling clarity using original microscopes or modern replicas.
  • Ford was able to make some of these improvements when he was granted access to one of Leeuwenhoek’s original microscopes owned by the Utrecht University Museum in the Netherlands. Leeuwenhoek made his own instruments, which had only a single lens made from a tiny bead of glass mounted in a metal frame. These simple microscopes were harder to make and to use than the more familiar two-lens compound microscope, but offered greater resolution.
  • Hooke popularized microscopy in his 1665 masterpiece Micrographia, which included stunning engravings of fleas, mites and the compound eyes of flies. The diarist Samuel Pepys judged it ‘the most ingenious book that I ever read in my life’. Ford’s findings show that Hooke was not, as some have imagined, embellishing his drawings from imagination, but should genuinely have been able to see such things as the tiny hairs on the flea’s legs.
  • Even Hooke was temporarily foxed, however, when he was given the duty of reproducing the results described by Leeuwenhoek, a linen merchant of Delft, in a letter to the Royal Society. It took him over a year before he could see these animalcules, whereupon he wrote that ‘I was very much surprised at this so wonderful a spectacle, having never seen any living creature comparable to these for smallness.’ ‘The abilities of those pioneer microscopists were so much greater than has been recognized’ says Ford. He attributes this misconception to the fact that ‘no longer is microscopy properly taught.’
  • Reference: 1. Ford, B. J., Microsc. Anal., March 2011 (in press).
  •  
    The first microscopes were a lot better than they are given credit for.
Weiye Loh

Response to Guardian's Article on Singapore Elections | the kent ridge common - 0 views

  • Further, grumblings on Facebook accounts are hardly ‘anonymous’. Lastly, how anonymous can bloggers be, when every now and then a racist blogger gets arrested by the state? Think about it. These sorts of cases prove that the state does screen, survey and monitor the online community, and as all of us know there are many vehement anti-PAP comments and articles, much of which are outright slander and defamation.
  • Yet at the end of the day, it is the racist blogger, not the anti-government or anti-PAP blogger that gets arrested. The Singaporean model is a much more complex and sophisticated phenomenon than this Guardian writer gives it credit.
  • Why did this Guardian writer, anyway, pander to a favourite Western stereotype of that “far-off Asian undemocratic, repressive regime”? Is she really in Singapore, as the Guardian claims? (“Kate Hodal in Singapore” is written at the top.) Can the Guardian be any more predictable and trite?
  • Can any Singaporean honestly say that she/he can conceive of a fellow Singaporean setting himself or herself on fire along Orchard Road or Shenton Way, as a result of desperate economic pressures or financial constraints? Can we even fathom the social and economic pressures that mobilized a whole people to protest and overthrow a corrupt, US-backed regime? (That is, not during election time.) Singapore has real problems, the People’s Action Party has its real problems, and there is indeed much room for improvement. Yet such irresponsible reporting by one of the esteemed newspapers from the UK is utterly disappointing, not constructive in the least, and utterly misrepresents our political situation (and may potentially provoke more irrationality in our society, leading people to ‘believe’ in an affinity with their Arab peers, which leads to more radicalism).
Jiamin Lin

Firms allowed to share private data - 0 views

  •  
    Companies that request their customers' private information may in turn distribute these confidential particulars to others. As a result, cases of fraud and identity theft have surfaced, with fraudsters using these distributed identities to apply for loans or credit cards. Unlike in other countries, no privacy law has been put in place here to safeguard an individual's data against unauthorized commercial use. Fraudsters are thus able to exploit this loophole. Ethical Question: Is it right for companies to request their customers' private information for certain reasons? Is it even fair that they distribute this information to third parties, perhaps as a way to make money? Problem: I think the main problem is that there isn't a law in Singapore that safeguards an individual's data against unauthorized commercial use. Even though the Model Data Protection Code scheme tries to do the above, it is, after all, still a voluntary scheme. Companies can opt to adopt the scheme, but whether they choose to apply it consistently is another issue. As long as a privacy law is not in place, this issue will continue to recur in Singapore.
Chen Guo Lim

Anti plagiarism is (un)ethical - 20 views

I think there is a need to investigate the motivation behind using such software. Suppose a writer has recently come across an article that seemingly has plagiarised, thus using the software to ...

Turnitin plagiarism

Weiye Loh

'The Social Network': A Review Of Aaron Sorkin's Film About Facebook And Mark Zuckerber... - 0 views

  • What is important in Zuckerberg’s story is not that he’s a boy genius. He plainly is, but many are. It’s not that he’s a socially clumsy (relative to the Harvard elite) boy genius. Every one of them is. And it’s not that he invented an amazing product through hard work and insight that millions love. The history of American entrepreneurism is just that history, told with different technologies at different times and places.
  • what’s important here is that Zuckerberg’s genius could be embraced by half-a-billion people within six years of its first being launched, without (and here is the critical bit) asking permission of anyone. The real story is not the invention. It is the platform that makes the invention sing. Zuckerberg didn’t invent that platform. He was a hacker (a term of praise) who built for it. And as much as Zuckerberg deserves endless respect from every decent soul for his success, the real hero in this story doesn’t even get a credit. It’s something Sorkin doesn’t even notice.
  • Zuckerberg faced no such barrier. For less than $1,000, he could get his idea onto the Internet. He needed no permission from the network provider. He needed no clearance from Harvard to offer it to Harvard students. Neither with Yale, or Princeton, or Stanford. Nor with every other community he invited in. Because the platform of the Internet is open and free, or in the language of the day, because it is a “neutral network,” a billion Mark Zuckerbergs have the opportunity to invent for the platform. And though there are crucial partners who are essential to bring the product to market, the cost of proving viability on this platform has dropped dramatically. You don’t even have to possess Zuckerberg’s technical genius to develop your own idea for the Internet today.
    • Weiye Loh
       
      What a shallow techno-utopianist view...
  • that is tragedy because just at the moment when we celebrate the product of these two wonders—Zuckerberg and the Internet—working together, policymakers are conspiring ferociously with old world powers to remove the conditions for this success. As “network neutrality” gets bargained away—to add insult to injury, by an administration that was elected with the promise to defend it—the opportunities for the Zuckerbergs of tomorrow will shrink. And as they do, we will return more to the world where success depends upon permission. And privilege. And insiders. And where fewer turn their souls to inventing the next great idea.
  • Zuckerberg is a rightful hero of our time. I want my kids to admire him. To his credit, Sorkin gives him the only lines of true insight in the film: In response to the twins’ lawsuit, he asks, does “a guy who makes a really good chair owe money to anyone who ever made a chair?” And to his partner who signed away his ownership in Facebook: “You’re gonna blame me because you were the business head of the company and you made a bad business deal with your own company?” Friends who know Zuckerberg say such insight is common. No doubt his handlers are panicked that the film will tarnish the brand. He should listen less to these handlers. As I looked around at the packed theater of teens and twenty-somethings, there was no doubt who was in the right, however geeky and clumsy and sad. That generation will judge this new world. If, that is, we allow that new world to continue to flourish.
  •  
    Page 2
Weiye Loh

BBC News - Graduates - the new measure of power - 0 views

  • There are more universities operating in other countries, recruiting students from overseas, setting up partnerships, providing online degrees and teaching in other languages than ever before. [Image caption: South Korea has turned itself into a global player in higher education.] Chinese students are taking degrees taught in English in Finnish universities; the Sorbonne is awarding French degrees in Abu Dhabi; US universities are opening in China and South Korean universities are switching teaching to English so they can compete with everyone else. It’s like one of those board games where all the players are trying to move on to everyone else’s squares. It’s not simply a case of western universities looking for new markets. Many countries in the Middle East and Asia are deliberately seeking overseas universities, as a way of fast-forwarding a research base.
  • "There's a world view that universities, and the most talented people in universities, will operate beyond sovereignty. "Much like in the renaissance in Europe, when the talent class and the creative class travelled among the great idea capitals, so in the 21st century, the people who carry the ideas that will shape the future will travel among the capitals.
  • "But instead of old European names it will be names like Shanghai and Abu Dhabi and London and New York. Those universities will be populated by those high-talent people." New York University, one of the biggest private universities in the US, has campuses in New York and Abu Dhabi, with plans for another in Shanghai. It also has a further 16 academic centres around the world. Mr Sexton sets out a different kind of map of the world, in which universities, with bases in several cities, become the hubs for the economies of the future, "magnetising talent" and providing the ideas and energy to drive economic innovation.
  • Universities are also being used as flag carriers for national economic ambitions - driving forward modernisation plans. For some it's been a spectacularly fast rise. According to the OECD, in the 1960s South Korea had a similar national wealth to Afghanistan. Now it tops international education league tables and has some of the highest-rated universities in the world. The Pohang University of Science and Technology in South Korea was only founded in 1986 - and is now in the top 30 of the Times Higher's global league table, elbowing past many ancient and venerable institutions. It also wants to compete on an international stage so the university has decided that all its graduate programmes should be taught in English rather than Korean.
  • governments want to use universities to upgrade their workforce and develop hi-tech industries.
  • "Universities are being seen as a key to the new economies, they're trying to grow the knowledge economy by building a base in universities," says Professor Altbach. Families, from rural China to eastern Europe, are also seeing university as a way of helping their children to get higher-paid jobs. A growing middle-class in India is pushing an expansion in places. Universities also stand to gain from recruiting overseas. "Universities in the rich countries are making big bucks," he says. This international trade is worth at least $50 billion a year, he estimates, the lion's share currently being claimed by the US.
  • Technology, much of it hatched on university campuses, is also changing higher education and blurring national boundaries.
  • It raises many questions too. What are the expectations of this Facebook generation? They might have degrees and be able to see what is happening on the other side of the world, but will there be enough jobs to match their ambitions? Who is going to pay for such an expanded university system? And what about those who will struggle to afford a place?
  • The success of the US system is not just about funding, says Professor Altbach. It’s also because it’s well run and research is effectively organised. “Of course there are lots of lousy institutions in the US, but overall the system works well.” “Developed economies are already highly dependent on universities and if anything that reliance will increase,” says David Willetts, UK universities minister. The status of the US system has been bolstered by the link between its university research and developing hi-tech industries. Icons of the internet age such as Google and Facebook grew out of US campuses.
Weiye Loh

Art and Attribution: Who is an "Artist"? » Sociological Images - 0 views

  • NPR short on artist Liu Bolin.  Bolin, we are told, “has a habit of painting himself” so as to disappear into his surroundings.  The idea is to illustrate the way in which humans are increasingly “merged” with their environment.
  • So how does he do it?  Well, it turns out that he doesn’t.  Instead, “assistants” spend hours painting him.  And someone else photographs him.  He just stands there.  Watch how the process is described in this one minute clip:

    So what makes an artist?

  • One might argue that it was Bolin who had the idea to illustrate the contemporary human condition in this way. That the “art” in this work is really in his inspiration, while the “work” in this art is what is being done by the assistants. Yet clearly there is “art” in their work, too, given that they are to be credited for creating the eerie illusions with paint. Yet it is Bolin who is named as the artist; his assistants aren’t named at all.  What is it about the art world — or our world more generally — that makes this asymmetrical attribution go unnoticed so much of the time?
  • historically it probably goes back to the master/apprentice, atelier setup of the Renaissance era and earlier. And then with the cult of the “genius” that surrounds artists nowadays, it’s no wonder that assistants would be invisible.
  • In my art history classes about Renaissance and other classical painting, we talked about how often the “artist” would be the master painter, but had a lot of help from one or more assistants when executing the painting. Every now and then one of those assistants/apprentices would be considered good enough to go off and be recognized as an artist on his own, but in general, those guys were pretty nameless despite sometimes decades of service.
  • similar to the way that businesses and organizations have public faces – CEOs, etc. – and the efforts of everyone who works for them are often credited to the CEOs themselves, for better or for worse, whether they deserve the accolades or not. There’s some asymmetrical attribution for you!
Weiye Loh

New Service Adds Your Drunken Facebook Photos To Employer Background Checks, For Up To ... - 0 views

  •  
    The FTC has given thumbs up to a company, Social Intelligence Corp., selling a new kind of employee background check to employers. This one scours the internet for your posts and pictures to social media sites and creates a file of all the dumb stuff you ever uploaded online. For instance, this sample they provided was flagged for "Demonstrating potentially violent behavior" because of "flagrant display of weapons or bombs." The FTC said that the file, which will last for up to seven years, does not violate the Fair Credit Reporting Act. The company also says that info in your file will be updated when you remove pictures from the social media sites. Forbes reports, "new employers who run searches through Social Intelligence won't have access to the materials if they are completely removed from the Internet."
Weiye Loh

Balderdash - 0 views

  • A letter Paul wrote to complain about “The Dead Sea Scrolls” exhibition at the Arts House:

    To Ms. Amira Osman (Marketing and Communications Manager)
    cc. Colin Goh, General Manager; Florence Lee, Deputy General Manager

    Dear Ms. Osman,

    I visited the Dead Sea Scrolls “exhibition” today with my wife. Thinking that it was from a legitimate scholarly institute or (how naïve of me!) the Israel Antiquities Authority, I was looking forward to a day of education and entertainment. Yet when I got in, much of the exhibition (and the booklets) merely espouses an evangelical (fundamentalist) view of the Bible – there are booklets on the inerrancy of the Bible, on how archaeology has proven the Bible to be true, etc.

    Apart from these, there are many blatant misrepresentations of the state of archaeology and mainstream biblical scholarship:

    a) There was an initial screening upon entry of a 5-10 minute pseudo-documentary on the Dead Sea Scrolls. A presenter (can’t remember the name) was described as a “biblical archaeologist” – a term that no serious archaeologist working in the Levant would apply to him or herself. (Some prefer the term “Syro-Palestinian archaeologist”, but almost all reject the term “biblical archaeologist”.) See the book by Thomas W. Davis, “Shifting Sands: The Rise and Fall of Biblical Archaeology”, Oxford, New York, 2004. Davis is an actual archaeologist working in the field and the book explains why the term “biblical archaeologist” is not considered a legitimate term by serious archaeologists.

    b) In the same presentation, the presenter made the erroneous statement that the entire Old Testament was translated into Greek in the third century BCE. This is a mistake – only the Pentateuch (the first five books of the Old Testament) was translated during that time. Note that this ‘error’ is not inadvertent but is a familiar claim by evangelical apologists who try to argue for an early date for all the books of the Old Testament: if all the books had been translated by the third century BCE, obviously these books must all have been written before then! This flies against modern scholarship, which shows that some books of the Old Testament, such as the Book of Daniel, were written only in the second century BCE. The actual state of scholarship on the Septuagint [the Greek translation of the Bible] is accurately given in the book by Ernst Würthwein, “The Text of the Old Testament”, Eerdmans, 1988, pp. 52-54.

    c) Perhaps the most blatant error was the claim that the “Magdalene fragments” – which contain the 26th chapter of the Gospel of Matthew – are dated to 50 AD!!! Scholars are unanimous in dating these fragments to 200 AD. The only ‘scholar’ cited who dated these fragments to 50 AD was the German papyrologist Carsten Thiede – a well-known fundamentalist. This is what Burton Mack (a critical – legitimate – NT scholar) has to say about Thiede’s eccentric dating: “From a critical scholar's point of view, Thiede's proposal is an example of just how desperate the Christian imagination can become in the quest to argue for the literal facticity of the Christian gospels” [Mack, Burton L., “Who Wrote the New Testament?: The Making of the Christian Myth”, HarperCollins, San Francisco, 1995]. Yet the dating of 50 AD is presented as though it were a scholarly consensus position!

    In fact the last point was so blatant that I confronted the exhibitors. (Tak Boleh Tahan!! - I couldn’t stand it!) One American exhibitor told me, “Yes, it could have been worded differently, but then we would have to change the whole display” (!!). When I told him that this was not a typo but a blatant attempt to deceive, he mentioned that Thiede’s views are supported by “The Dallas Theological Seminary” – another well-known evangelical institute!

    I have no issue with the religious strengthening their faith by having their own internal exhibitions on historical artifacts etc. But when it is presented to the public as a scholarly exhibition, this comes quite close to being dishonest. I felt cheated of the $36 I paid for the tickets and of the hour I spent there before realizing what type of exhibition it was. I am disappointed with The Arts House for showcasing this without warning potential visitors of its clear religious bias.

    Yours sincerely,
    Paul Tobin

    To their credit, the Arts House speedily replied.
    • Weiye Loh
       
      The issue of truth is indeed so maddening. Certainly, the 'production' of truth has been widely researched and debated by scholars. Spivak, for example, argued for deconstruction by means of questioning the privilege of the identity that is believed to hold the truth. And along the same line, albeit somewhat misunderstood I feel, it was mentioned in class that somehow people who are oppressed know better.
Weiye Loh

A geophysiologist's thoughts on geoengineering - Philosophical Transactions A - 0 views

  • The Earth is now recognized as a self-regulating system that includes a reactive biosphere; the system maintains a long-term steady-state climate and surface chemical composition favourable for life. We are perturbing the steady state by changing the land surface from mainly forests to farm land and by adding greenhouse gases and aerosol pollutants to the air. We appear to have exceeded the natural capacity to counter our perturbation and consequently the system is changing to a new and as yet unknown but probably adverse state. I suggest here that we regard the Earth as a physiological system and consider amelioration techniques, geoengineering, as comparable to nineteenth century medicine.
  • Organisms change their world locally for purely personal selfish reasons; if the advantage conferred by the ‘engineering’ is sufficiently favourable, it allows them and their environment to expand until dominant on a planetary scale.
  • Our use of fires as a biocide to clear land of natural forests and replace them with farmland was our second act of geoengineering; together these acts have led the Earth to evolve to its current state. As a consequence, most of us are now urban and our environment is an artefact of engineering.
  • Physical means of amelioration, such as changing the planetary albedo, are the subject of other papers of this theme issue and I thought it would be useful here to describe physiological methods for geoengineering. These include tree planting, the fertilization of ocean algal ecosystems with iron, the direct synthesis of food from inorganic raw materials and the production of biofuels.
  • Tree planting would seem to be a sensible way to remove CO2 naturally from the air, at least for the time it takes for the tree to reach maturity. But in practice the clearance of forests for farm land and biofuels is now proceeding so rapidly that there is little chance that tree planting could keep pace.
  • Oceans cover over 70 per cent of the Earth's surface and are uninhabited by humans. In addition, most of the ocean surface waters carry only a sparse population of photosynthetic organisms, mainly because the mineral and other nutrients in the water below the thermocline do not readily mix with the warmer surface layer. Some essential nutrients such as iron are present in suboptimal abundance even where other nutrients are present and this led to the suggestion by John Martin in a lecture in 1991 that fertilization with the trace nutrient iron would allow algal blooms to develop that would cool the Earth by removing CO2
  • The Earth system is dynamically stable but with strong feedbacks. Its behaviour resembles more the physiology of a living organism than that of the equilibrium box models of the last century
  • For almost all other ailments, there was nothing available but nostrums and comforting words. At that time, despite a well-founded science of physiology, we were still ignorant about the human body or the host–parasite relationship it had with other organisms. Wise physicians knew that letting nature take its course without intervention would often allow natural self-regulation to make the cure. They were not averse to claiming credit for their skill when this happened.
  • The alternative is the acceptance of a massive natural cull of humanity and a return to an Earth that freely regulates itself but in the hot state.
  • Global heating would not have happened but for the rapid expansion in numbers and wealth of humanity. Had we heeded Malthus's warning and kept the human population to less than one billion, we would not now be facing a torrid future. Whether or not we go for Bali or use geoengineering, the planet is likely, massively and cruelly, to cull us, in the same merciless way that we have eliminated so many species by changing their environment into one where survival is difficult.
  •  
    A geophysiologist's thoughts on geoengineering
Weiye Loh

Essay - The End of Tenure? - NYTimes.com - 0 views

  • The cost of a college education has risen, in real dollars, by 250 to 300 percent over the past three decades, far above the rate of inflation. Elite private colleges can cost more than $200,000 over four years. Total student-loan debt, at nearly $830 billion, recently surpassed total national credit card debt. Meanwhile, university presidents, who can make upward of $1 million annually, gravely intone that the $50,000 price tag doesn’t even cover the full cost of a year’s education.
  • Then your daughter reports that her history prof is a part-time adjunct, who might be making $1,500 for a semester’s work. There’s something wrong with this picture.
  • The higher-ed jeremiads of the last generation came mainly from the right. But this time, it’s the tenured radicals — or at least the tenured liberals — who are leading the charge. Hacker is a longtime contributor to The New York Review of Books and the author of the acclaimed study “Two Nations: Black and White, Separate, Hostile, Unequal,”
  • And these two books arrive at a time, unlike the early 1990s, when universities are, like many students, backed into a fiscal corner. Taylor writes of walking into a meeting one day and learning that Columbia’s endowment had dropped by “at least” 30 percent. Simply brushing off calls for reform, however strident and scattershot, may no longer be an option.
  • The labor system, for one thing, is clearly unjust. Tenured and tenure-track professors earn most of the money and benefits, but they’re a minority at the top of a pyramid. Nearly two-thirds of all college teachers are non-tenure-track adjuncts like Matt Williams, who told Hacker and Dreifus he had taught a dozen courses at two colleges in the Akron area the previous year, earning the equivalent of about $8.50 an hour by his reckoning. It is foolish that graduate programs are pumping new Ph.D.’s into a world without decent jobs for them. If some programs were phased out, teaching loads might be raised for some on the tenure track, to the benefit of undergraduate education.
  • it might well be time to think about vetoing Olympic-quality athletic ­facilities and trimming the ranks of administrators. At Williams, a small liberal arts college renowned for teaching, 70 percent of employees do something other than teach.
  • But Hacker and Dreifus go much further, all but calling for an end to the role of universities in the production of knowledge. Spin off the med schools and research institutes, they say. University presidents “should be musing about education, not angling for another center on antiterrorist technologies.” As for the humanities, let professors do research after-hours, on top of much heavier teaching schedules. “In other occupations, when people feel there is something they want to write, they do it on their own time and at their own expense,” the authors declare. But it seems doubtful that, say, “Battle Cry of Freedom,” the acclaimed Civil War history by Princeton’s James McPherson, could have been written on the weekends, or without the advance spadework of countless obscure monographs. If it is false that research invariably leads to better teaching, it is equally false to say that it never does.
  • Hacker’s home institution, the public Queens College, which has a spartan budget, commuter students and a three-or-four-course teaching load per semester. Taylor, by contrast, has spent his career on the elite end of higher education, but he is no less disillusioned. He shares Hacker and Dreifus’s concerns about overspecialized research and the unintended effects of tenure, which he believes blocks the way to fresh ideas. Taylor has backed away from some of the most incendiary proposals he made last year in a New York Times Op-Ed article, cheekily headlined “End the University as We Know It” — an article, he reports, that drew near-universal condemnation from academics and near-universal praise from everyone else. Back then, he called for the flat-out abolition of traditional departments, to be replaced by temporary, “problem-centered” programs focusing on issues like Mind, Space, Time, Life and Water. Now, he more realistically suggests the creation of cross-­disciplinary “Emerging Zones.” He thinks professors need to get over their fear of corporate partnerships and embrace efficiency-enhancing technologies.
  • It is not news that America is a land of haves and have-nots. It is news that colleges are themselves dividing into haves and have-nots; they are becoming engines of inequality. And that — not whether some professors can afford to wear Marc Jacobs — is the real scandal.
  •  
    The End of Tenure? By CHRISTOPHER SHEA Published: September 3, 2010
Weiye Loh

Facebook's 'See Friendship' Feature Raises Privacy Worries - TIME - 0 views

  • A button called "See Friendship" aggregates onto a single page all of the information that two friends share: photos both people have been tagged in, events they have attended or are planning to attend, comments they have exchanged, etc. To see this stuff, you need only be "friends" with one of the people. So let's say I've turned down an ex-boyfriend's request for friendship; he can still peruse my pictures or trace my whereabouts by viewing my interactions with our mutual pals.
  • The "See Friendship" feature was launched by Facebook developer Wayne Kao, who credited his inspiration to the joy of browsing through friends' photos. "A similarly magical experience was possible if all of the photos and posts between two friends were brought together," he wrote on the Facebook blog. "You may even see that moment when your favorite couple met at a party you all attended."
  • Barry Wellman, a University of Toronto professor who studies social networks and real-life relationships, thinks Facebook developers don't understand the fundamental difference between life online and offline. "We all live in segmented, diversified worlds. We might be juggling girlfriends, jobs or different groups of friends," he says. "But [Facebook thinks] we're in one integrated community."
  • In this era of "media convergence" — when GPS and wireless devices are colluding to make one's offline location known in the virtual world — friendship pages allow you to see an event your nonfriend has RSVP'd to or a plan he or she made with your mutual pal.
Weiye Loh

Experts claim 2006 climate report plagiarized - USATODAY.com - 0 views

  • An influential 2006 congressional report that raised questions about the validity of global warming research was partly based on material copied from textbooks, Wikipedia and the writings of one of the scientists criticized in the report, plagiarism experts say.
  • "It kind of undermines the credibility of your work criticizing others' integrity when you don't conform to the basic rules of scholarship," Virginia Tech plagiarism expert Skip Garner says.
  • Led by George Mason University statistician Edward Wegman, the 2006 report criticized the statistics and scholarship of scientists who found the last century the warmest in 1,000 years.
  • But in March, climate scientist Raymond Bradley of the University of Massachusetts asked GMU, based in Fairfax, Va., to investigate "clear plagiarism" of one of his textbooks. Bradley says he learned of the copying on the Deep Climate website and through a now year-long analysis of the Wegman report made by retired computer scientist John Mashey of Portola Valley, Calif. Mashey's analysis concludes that 35 of the report's 91 pages "are mostly plagiarized text, but often injected with errors, bias and changes of meaning." Copying others' text or ideas without crediting them violates universities' standards, according to Liz Wager of the London-based Committee on Publication Ethics.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.”Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance.“Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association.How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control.Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works.Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation? (See the short numerical sketch after these annotations.)
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakesEven when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated.“Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong Science fails to face the shortcomings of statistics
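
    A minimal numerical sketch (Python) of two of the pitfalls described above: the "transposed conditional" behind the barking-dog example, and the multiplicity problem of running many tests at the .05 level. The specific numbers (a 10% prior chance of hunger, 20 simultaneous tests) are illustrative assumptions, not figures from the article.

```python
# Illustrative numbers only; they are not taken from the Science News article.

def bayes_posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(hypothesis | data) via Bayes' theorem."""
    numerator = prior * p_data_given_h
    return numerator / (numerator + (1 - prior) * p_data_given_not_h)

# 1) Transposed conditional: a well-fed dog barks only 5% of the time,
#    but hearing a bark does not make "hungry" 95% probable.
#    Assume the dog is hungry 10% of the time and always barks when hungry.
p_hungry = 0.10            # assumed prior probability of "hungry"
p_bark_if_hungry = 1.00    # assumed likelihood of a bark when hungry
p_bark_if_fed = 0.05       # the 5% "fluke" rate, i.e. P(result | null is true)
posterior = bayes_posterior(p_hungry, p_bark_if_hungry, p_bark_if_fed)
print(f"P(bark | well-fed) = {p_bark_if_fed:.2f}")
print(f"P(hungry | bark)   = {posterior:.2f}")   # about 0.69, not 0.95

# 2) Multiplicity: with many independent tests at the .05 level, the chance
#    of at least one false positive grows quickly.
alpha, n_tests = 0.05, 20
p_any_fluke = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive in {n_tests} tests) = {p_any_fluke:.0%}")
```

    With these assumed numbers, a bark raises the probability of hunger only to about 0.69, and twenty independent tests at the .05 level yield at least one spurious "significant" result roughly 64% of the time.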