Roger Pielke Jr.'s Blog: How to Get to 80% "Clean Energy" by 2035

  • I have put together a quick spreadsheet to allow me to do a bit of sensitivity analysis of what it would take for the US to get to 80% "clean energy" in its electricity supply by 2035, as proposed by President Obama in his State of the Union Speech
  • 1. I started with the projections from the EIA to 2035 available here in XLS. 2. I then calculated the share of clean energy in 2011, assuming that natural gas gets a 50% credit for being clean.  That share is just under 44% (Nukes 21%, Renewable 13%, Gas 10%). 3. I then calculated how that share could be increased to 80% by 2035.
  • Here is what I found: 1. Coal pretty much has to go away.  Specifically, about 90% or more of coal energy would have to be replaced. 2. I first looked at replacing all the coal with gas, all else equal.  That gets the share of clean energy up to about 68%, a ways off of the target. 3. I then fiddled with the numbers to arrive at 80%.  One way to get there would be to increase the share of nukes to 43%, gas to 31% and renewables to 22% (Note that the EIA reference scenario -- BAU -- to 2035 has these shares at 17%, 21% and 17% respectively, for a share of 45% just about like today.)
  • Increasing nuclear power in the EIA reference scenario from a 17% to 43% share of electricity implies, in round numbers, about 300 new nuclear power plants by 2035.***  If you do not like nuclear you can substitute wind turbines or solar thermal plants (or even reductions in electricity consumption) according to the data provided in The Climate Fix, Table 4.4.  The magnitude of the task is the same size, just expressed differently.
  • One nuclear plant worth of carbon-free energy every 30 days between now and 2035.  This does not even consider electrification of some fraction of the vehicle fleet -- another of President Obama's goals -- which presumably would add a not-insignificant amount to electricity demand. Thus, I'd suggest that the President's clean energy goal is much more of the aspirational variety than an actual policy target expected to be hit precisely.
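The arithmetic behind the spreadsheet exercise above is simple to reproduce. A minimal sketch: the 50%-credit rule for natural gas and the share numbers are taken from the post, while the function name is mine.

```python
# Toy re-creation of the clean-energy-share arithmetic described in the post.
# The post's stated rule: natural gas gets a 50% credit for being "clean".

def clean_share(nuclear, renewables, gas, gas_credit=0.5):
    """Share of 'clean' electricity, crediting natural gas at gas_credit."""
    return nuclear + renewables + gas_credit * gas

# 2011 shares from the post: nukes 21%, renewables 13%, gas 20% (10% after credit)
share_2011 = clean_share(0.21, 0.13, 0.20)
print(f"2011 clean share: {share_2011:.1%}")

# One 2035 mix the post says reaches 80%: nukes 43%, renewables 22%, gas 31%
share_2035 = clean_share(0.43, 0.22, 0.31)
print(f"2035 scenario clean share: {share_2035:.1%}")
```

Plugging in the post's numbers gives just under 44% for 2011 and roughly 80% for the 2035 scenario, matching the figures quoted above.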

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II

  • wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his regard of the climate change debate. That article turns out to be rather instructional as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • MM claims that global temperatures are not accelerating. The claims have however been roundly disproved as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of methodologies employed in that study.
  • This Wikipedia article explains in better layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC)
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glacier in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?) At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If  Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on?I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?) The answer is a question and the question is "Where is your evidence?"

Adventures in Flay-land: James Delingpole and the "Science" of Denialism

  • Perhaps like me, you watched the BBC Two Horizon programme on Monday night, presented by Sir Paul Nurse, president of the Royal Society and a Nobel Prize-winning geneticist honoured for his discovery of genes that regulate cell division.
  • James. He really believes there's some kind of mainstream science "warmist" conspiracy against the brave outliers who dare to challenge the consensus. He really believes that "climategate" is a real scandal. He fails to understand that it is a common practice in statistics to splice together two or more datasets where you know that the quality of data is patchy. In the case of "climategate", researchers found that indirect temperature measurements based on tree ring widths (the tree ring temperature proxy) are consistent with other proxy methods of recording temperature from before the start of the instrumental temperature record (around 1950) but begin to show a decline in temperature after that for reasons which are unclear. Actual temperature measurements however show the opposite. The researcher at the head of the climategate affair, Phil Jones, created a graph of the temperature record to include on the cover of a report for policy makers and journalists. For this graph he simply spliced together the tree ring proxy data up until 1950 with the recorded data after that using statistical techniques to bring them into agreement. What made this seem particularly dodgy was an email intercepted by a hacker in which Jones referred to this practice as "Mike's Nature trick", referring to a paper published by his colleague Michael Mann in the journal Nature. It is however nothing out of the ordinary. Delingpole and others have talked about how this "trick" was used here to "hide the decline" revealed by the other dataset, as though this was some sort of deception. The fact that all parties were found to have behaved ethically is simply further evidence of the global warmist conspiracy. Delingpole takes it further and casts aspersions on scientific consensus and the entire peer review process.
  • When Nurse asked Delingpole the very straightforward question of whether he would be willing to trust a scientific consensus if he required treatment for cancer, he could have said "Gee, that's an interesting question. Let me think about that and why it's different."
  • Instead, he became defensive and lost his focus. Eventually he would make such regrettable statements as this one: "It is not my job to sit down and read peer-reviewed papers because I simply haven’t got the time, I haven’t got the scientific expertise… I am an interpreter of interpretation."
  • In a parallel universe where James Delingpole is not the "penis" that Ben Goldacre describes him to be, he might have said the following: Gee, that's an interesting question. Let me think about why it's different. (Thinks) Well, it seems to me that when evaluating a scientifically agreed treatment for a disease such as cancer, we have not only all the theory to peruse and the randomized and blinded trials, but also thousands if not millions of case studies where people have undergone the intervention. We have enough data to estimate a person's chances of recovery and know that on average they will do better. When discussing climate change, we really only have the one case study. Just the one earth. And it's a patient that has not undergone any intervention. The scientific consensus is therefore entirely theoretical and intangible. This makes it more difficult for the lay person such as myself to trust it.
  • Sir Paul ended the program saying "Scientists have got to get out there… if we do not do that it will be filled by others who don’t understand the science, and who may be driven by politics and ideology."
  • If proxy tracks instrumental from 1850 to 1960 but then diverges for unknown reasons, how do we know that the proxy is valid for reconstructing temperatures in periods prior to 1850?
  • This is a good question and one I'm not sure I can answer to anyone's satisfaction. We seem to have good agreement among several forms of temperature proxy going back centuries and with direct measurements back to 1880. There is divergence in more recent years and there are several theories as to why that might be. Some possible explanations here: http://www.skepticalscience.com/Tree-ring-proxies-divergence-problem.htm
  • In the physical world we can never be absolutely certain of anything. René Descartes showed it was impossible to prove that everything he sensed wasn't manipulated by some invisible demon.
  • It is necessary to first make certain assumptions about the universe that we observe. After that, we can only go with the best theories available that allow us to make scientific progress.
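The dataset-splicing practice discussed in these annotations can be illustrated with a toy example. The sketch below uses a simple mean-offset rule over the overlap period and invented numbers; it shows the general idea of joining a proxy series onto an instrumental one, not the actual statistical method used in the published work.

```python
# Toy splice of a proxy temperature series onto an instrumental one:
# use proxy values before a cutoff year and instrumental values after it,
# first shifting the proxy so the two series agree on average where they
# overlap. Data and method here are invented for illustration.

def splice(proxy, instrumental, cutoff):
    """proxy, instrumental: dicts mapping year -> temperature anomaly."""
    overlap = [y for y in proxy if y in instrumental and y <= cutoff]
    # Offset that brings the proxy into mean agreement over the overlap
    offset = sum(instrumental[y] - proxy[y] for y in overlap) / len(overlap)
    combined = {y: t + offset for y, t in proxy.items() if y < cutoff}
    combined.update({y: t for y, t in instrumental.items() if y >= cutoff})
    return combined

proxy = {1940: -0.1, 1945: 0.0, 1950: -0.2}        # diverges near the end
instrumental = {1945: 0.1, 1950: 0.2, 1955: 0.3}
print(splice(proxy, instrumental, 1950))
```

The divergence question above remains: a splice like this assumes the proxy's pre-cutoff relationship to temperature holds, which is exactly what the post-cutoff divergence calls into question.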

Himalayan glaciers not melting because of climate change, report finds - Telegraph

  • Himalayan glaciers are actually advancing rather than retreating, claims the first major study since a controversial UN report said they would be melted within a quarter of a century.
  • Researchers have discovered that contrary to popular belief half of the ice flows in the Karakoram range of the mountains are actually growing rather than shrinking.
  • The discovery adds a new twist to the row over whether global warming is causing the world's highest mountain range to lose its ice cover.
  • It further challenges claims made in a 2007 report by the UN's Intergovernmental Panel on Climate Change that the glaciers would be gone by 2035.
  • Although the head of the panel Dr Rajendra Pachauri later admitted the claim was an error gleaned from unchecked research, he maintained that global warming was melting the glaciers at "a rapid rate", threatening floods throughout north India.
  • The new study by scientists at the Universities of California and Potsdam has found that half of the glaciers in the Karakoram range, in the northwestern Himalaya, are in fact advancing and that global warming is not the deciding factor in whether a glacier survives or melts.
  • Dr Bodo Bookhagen, Dirk Scherler and Manfred Strecker studied 286 glaciers from the Hindu Kush on the Afghan-Pakistan border to Bhutan, taking in six areas. Their report, published in the journal Nature Geoscience, found the key factor affecting their advance or retreat is the amount of debris – rocks and mud – strewn on their surface, not the general nature of climate change.
  • Glaciers surrounded by high mountains and covered with more than two centimetres of debris are protected from melting. Debris-covered glaciers are common in the rugged central Himalaya, but they are almost absent in subdued landscapes on the Tibetan Plateau, where retreat rates are higher.
  • In contrast, more than 50 per cent of observed glaciers in the Karakoram region in the northwestern Himalaya are advancing or stable.
  • "Our study shows that there is no uniform response of Himalayan glaciers to climate change and highlights the importance of debris cover for understanding glacier retreat, an effect that has so far been neglected in predictions of future water availability or global sea level," the authors concluded.
  • Dr Bookhagen said their report had shown "there is no stereotypical Himalayan glacier" in contrast to the UN's climate change report which, he said, "lumps all Himalayan glaciers together."
  • Dr Pachauri, head of the Nobel prize-winning UN Intergovernmental Panel on Climate Change, has remained silent on the matter since he was forced to admit his report's claim that the Himalayan glaciers would melt by 2035 was an error and had not been sourced from a peer-reviewed scientific journal. It came from a World Wildlife Fund report.
  • this latest tawdry addition to the pathetic lies of the Reality Deniers. If you go to a proper source which quotes the full study such as: http://www.sciencedaily.com/re... you discover that the findings of this study are rather different to those portrayed here.
  • only way to consistently maintain a lie is to refuse point-blank to publish ALL the findings of a study, but to cherry-pick the bits which are consistent with the ongoing lie, while ignoring the rest.
  • Bookhagen noted that glaciers in the Karakoram region of Northwestern Himalaya are mostly stagnating. However, glaciers in the Western, Central, and Eastern Himalaya are retreating, with the highest retreat rates -- approximately 8 meters per year -- in the Western Himalayan Mountains. The authors found that half of the studied glaciers in the Karakoram region are stable or advancing, whereas about two-thirds are in retreat elsewhere throughout High Asia
  • glaciers in the steep Himalaya are not only affected by temperature and precipitation, but also by debris coverage, and have no uniform and less predictable response, explained the authors. The debris coverage may be one of the missing links to creating a more coherent picture of glacial behavior throughout all mountains. The scientists contrast this Himalayan glacial study with glaciers from the gently dipping, low-relief Tibetan Plateau that have no debris coverage. Those glaciers behave in a different way, and their frontal changes can be explained by temperature and precipitation changes.

A DIY Data Manifesto | Webmonkey | Wired.com

  • Running a server is no more difficult than starting Windows on your desktop. That’s the message Dave Winer, forefather of blogging and creator of RSS, is trying to get across with his EC2 for Poets project.
  • Winer has put together an easy-to-follow tutorial so you too can set up a Windows-based server running in the cloud. Winer uses Amazon’s EC2 service. For a few dollars a month, Winer’s tutorial can have just about anyone up and running with their own server.
  • but education and empowerment aren’t Winer’s only goals. “I think it’s important to bust the mystique of servers,” says Winer, “it’s essential if we’re going to break free of the ‘corporate blogging silos.’”
  • The corporate blogging silos Winer is thinking of are services like Twitter, Facebook and WordPress. All three have been instrumental in the growth of the web; they make it easy for anyone to publish. But they also suffer denial-of-service attacks, government shutdowns and growing pains; centralized services like Twitter and Facebook are vulnerable. Services wrapped up in a single company are also vulnerable to market whims: Geocities is gone, FriendFeed languishes at Facebook and Yahoo is planning to sell Delicious. A centralized web is a brittle web, one that can make our data and our communications tools disappear tomorrow.
  • But the web will likely never be completely free of centralized services and Winer recognizes that. Most people will still choose convenience over freedom. Twitter’s user interface is simple, easy to use and works on half a dozen devices.
  • Winer isn’t the only one who believes the future of the web will be distributed systems that aren’t controlled by any single corporation or technology platform. Microformats founder Tantek Çelik is also working on a distributed publishing system that seeks to retain all the cool features of the social web, but remove the centralized bottleneck.
  • to be free of corporate blogging silos and centralized services the web will need an army of distributed servers run by hobbyists, not just tech-savvy web admins, but ordinary people who love the web and want to experiment.
  • Winer wants to start by creating a loosely coupled, distributed microblogging service like Twitter. “I’m pretty sure we know how to create a micro-blogging community with open formats and protocols and no central point of failure,” he writes on his blog.
  • that means decoupling the act of writing from the act of publishing. The idea isn’t to create an open alternative to Twitter, it’s to remove the need to use Twitter for writing on Twitter. Instead you write with the tools of your choice and publish to your own server.
  • If everyone publishes first to their own server there’s no single point of failure. There’s no fail whale, and no company owns your data. Once the content is on your server you can then push it on to wherever you’d like — Twitter, Tumblr, WordPress or whatever the site du jour is ten years from now.
  • The glue that holds this vision together is RSS. Winer sees RSS as the ideal broadcast mechanism for the distributed web and in fact he’s already using it — Winer has an RSS feed of links that are then pushed on to Twitter.
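Winer's "publish to your own server first" idea can be sketched in a few lines: write posts into a minimal RSS 2.0 document that downstream services (Twitter or anything else) could then poll and republish. The feed title and URLs below are placeholders, not anything from Winer's actual setup.

```python
# A minimal sketch of self-hosted publishing glued together by RSS:
# posts go into your own RSS 2.0 feed; syndication happens downstream.

import xml.etree.ElementTree as ET

def build_feed(posts, title="My Linkblog", link="http://example.com/"):
    """Return an RSS 2.0 document (as a string) for a list of posts."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
    return ET.tostring(rss, encoding="unicode")

feed = build_feed([{"title": "Hello, distributed web",
                    "link": "http://example.com/1"}])
print(feed)
```

The point of the design is the direction of flow: the canonical copy lives in your feed, and any silo only ever holds a pushed replica.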

Approaching the cliffs of time - Plane Talking

  • have you noticed how the capacity of the media to explain in lay terms such matters as quantum physics, or cosmology, is contracting faster than the universe is expanding? The more mind warping the discoveries the less opportunity there is to fit them into 30 seconds in a news cast, or 300 words in print.
  • There has been a long running conspiracy of convenience between science reporters and the science being reported to leave out inconvenient time and space consuming explanations, and go for the punch line that best suits the use of the media to lobby for more project funding.
  • Almost every space story I have written over 50 years has been about projects claiming to ‘discover the origins of the solar system/life on earth/life on Mars/discover the origins of the universe, or recover parts of things like comets because they are as old as the sun, except that we have discovered they aren’t ancient at all.’ None of them were ever designed to achieve those goals. They were brilliant projects, brilliantly misrepresented by the scientists and the reporters because an accurate story would have been incomprehensible to 99.9% of readers or viewers.
  • this push to abbreviate and banalify the more esoteric but truly intriguing mysteries of the universe has lurched close to parody yet failed to be as thoughtfully funny as Douglas Adams was with the Hitchhiker’s Guide to the Galaxy
  • Our most powerful telescopes are approaching what Columbia physicist and mathematician Brian Greene recently called the cliffs of time, beyond which an infinitely large yet progressively emptier universe lies forever invisible to us and vice versa, since to that universe, we also lie beyond the cliffs of time. This capturing of images from the start of time is being done by finding incredibly faint and old light using computing power and forensic techniques not even devised when Hubble was assembled on earth. In this instance Hubble has found the faint image of an object that emitted light a mere 480 million years after the ‘big bang’ 13.7 billion years ago. It is, thus, nearly as old as time itself.
  • The conspiracy of over simplification has until now kept the really gnarly principles involved in big bang theory out of the general media because nothing short of a first class degree in theoretical and practical physics is going to suffice for a reasonable overview. Plus a 100,000 word article with a few thousand diagrams.

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias

  • Dear Prof. Haidt, you understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative and liberal leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction; it is a common example of a bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.

News Clips: Pinning down acupuncture: It's a placebo

  • some doctors seem to have embraced even disproven remedies. Take, for instance, a review of acupuncture research that appeared last July in the New England Journal of Medicine. This highly respected journal is one of the most widely read by doctors across specialities. In Acupuncture For Chronic Low Back Pain, the authors reviewed clinical trials done to assess if acupuncture actually helps in chronic low back pain. The most important meta-analysis available was a 2008 study involving 6,359 patients, which 'showed that real acupuncture treatments were no more effective than sham acupuncture treatments'.
  • The authors then editorialised: 'There was nevertheless evidence that both real acupuncture and sham acupuncture were more effective than no treatment and that acupuncture can be a useful supplement to other forms of conventional therapy for low back pain.'
  • First, they admit that pooled clinical trials of the best sort show that real acupuncture does no better than sham acupuncture. This should mean that acupuncture does not work - full stop. But then they say that sham and real acupuncture each work as well as the other and are thus useful. Translation: Please use acupuncture as a placebo on your patients; just don't let them know it is a placebo.
  • I should add that I am not criticising TCM per se. Only acupuncture, a facet of TCM, albeit its most dramatic, is being scrutinised here. Chinese herbology must be analysed on its own merits. Interestingly, although acupuncture may be TCM's poster boy today, the Chinese physician in days of yore would have looked askance at it. Instead, his practice and prestige were based upon his grasp of the Chinese pharmacopoeia.
  • Acupuncture was left to the shamans and blood letters. After all, it was grounded not in the knowledge of which herbs were best for what conditions, but in astrology.
  • In Giovanni Maciocia's 2005 book, The Foundations Of Chinese Medicine: A Comprehensive Text For Acupuncturists And Herbalists, there is a chart showing the astrological provenance of acupuncture. The chart shows how the 12 main acupuncture meridians and the 12 main body segments correspond to the 12 Houses of the Chinese zodiac.
  • In Chinese cosmology, all life is animated by a numinous force called qi, the flow of which mirrors the sun's apparent 'movement' during the year through the ecliptic. (The ecliptic is the imaginary plane of the earth's orbit around the sun.) Moreover, everything in the Chinese zodiac is mirrored on Earth and in Man. This was taught even in the earliest systematised TCM text, the Yellow Emperor's Canon Of Medicine, thus: 'Heaven is covered with constellations, Earth with waterways, and man with channels.' This 'as above, so below' doctrine means that if there is qi flowing around in the imaginary closed loop of the zodiac, there is qi flowing correspondingly in the body's closed loop of imaginary meridians as well.
  • Note that not only is acupuncture astrological in origin but also the astrology is based on a model of the universe which has the earth at its centre. This geocentric model was an erroneous idea widely accepted before the Copernican revolution.
  • So should doctors check the daily horoscopes of their patients?
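The pattern the review describes — real and sham acupuncture both beating no treatment while not differing from each other — is the signature of a placebo response, and it is easy to simulate. All numbers below are invented for illustration; this is not the 2008 meta-analysis data.

```python
# Toy simulation of the placebo pattern described above: real and sham
# acupuncture arms share the same (placebo-sized) effect, while a
# no-treatment arm improves less. All effect sizes are invented.

import random

random.seed(0)  # make the simulation reproducible

def trial_arm(mean_improvement, n=1000, sd=2.0):
    """Simulated pain-score improvements for one trial arm."""
    return [random.gauss(mean_improvement, sd) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

real = trial_arm(1.5)   # real acupuncture: placebo-sized effect
sham = trial_arm(1.5)   # sham acupuncture: same effect size
none = trial_arm(0.0)   # no treatment

print(f"real vs sham difference: {mean(real) - mean(sham):+.2f}")
print(f"real vs none difference: {mean(real) - mean(none):+.2f}")
```

The simulated real-vs-sham difference hovers near zero while both arms clearly beat no treatment, which is exactly the result pattern the NEJM review reported.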

Information technology and economic change: The impact of the printing press | vox - Re...

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • ...8 more annotations...
  • Figure 1 summarises the data and shows how printing diffused from Mainz 1450-1500.
    Figure 1. The diffusion of the printing press
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology, other things equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth. I find a 60 percentage point growth advantage between 1500-1600.
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
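The identification strategy described in the bullets above, instrumenting press adoption with distance from Mainz because adoption may be endogenous to unobserved city quality, can be sketched on synthetic data. Everything below (city counts, coefficients, the invented "quality" confounder) is illustrative and is not Dittmar's actual data or code; it only shows why the instrument matters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical cities; the real dataset is far smaller

# Instrument: distance from Mainz. Confounder: unobserved "city quality".
distance = rng.uniform(0, 1000, n)      # km, illustrative
quality = rng.normal(0, 1, n)           # drives both adoption and growth

# Adoption is more likely near Mainz, but is also correlated with quality
latent = -0.002 * distance + 0.5 * quality + rng.normal(0, 1, n)
adopt = (latent > -1.0).astype(float)

TRUE_EFFECT = 0.60  # the column's IV estimate, used here as the simulated truth
growth = 0.30 + TRUE_EFFECT * adopt + 0.4 * quality + rng.normal(0, 0.2, n)

def ols(y, x):
    """Least squares with an intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS overstates the effect: quality raises both adoption and growth
naive = ols(growth, adopt)[1]

# 2SLS: stage 1 predicts adoption from distance; stage 2 regresses growth
# on the prediction, which carries only the exogenous, distance-driven part
b0, b1 = ols(adopt, distance)
adopt_hat = b0 + b1 * distance
iv = ols(growth, adopt_hat)[1]

print(f"naive OLS: {naive:.2f}  2SLS/IV: {iv:.2f}  true: {TRUE_EFFECT:.2f}")
```

With the endogenous adoption simulated here, the naive regression is biased upward while the distance instrument recovers something close to the simulated truth, which is the logic behind using distance from Mainz in the paper.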
9More

What humans know that Watson doesn't - CNN.com - 0 views

  • One of the most frustrating experiences produced by the winter from hell is dealing with the airlines' automated answer systems. Your flight has just been canceled and every second counts in getting an elusive seat. Yet you are stuck in an automated menu spelling out the name of your destination city.
  • Even more frustrating is knowing that you will never get to ask the question you really want to ask, as it isn't an option: "If I drive to Newark and board my Flight to Tel Aviv there will you cancel my whole trip, as I haven't started from my ticketed airport of origin, Ithaca?"
  • A human would immediately understand the question and give you an answer. That's why knowledgeable travelers rush to the nearest airport when they experience a cancellation, so they have a chance to talk to a human agent who can override the computer, rather than rebook by phone (more likely wait on hold and listen to messages about how wonderful a destination Tel Aviv is) or talk to a computer.
  • ...6 more annotations...
  • There is no doubt the IBM supercomputer Watson gave an impressive performance on "Jeopardy!" this week. But I was worried by the computer's biggest fluff Tuesday night. In answer to the question about naming a U.S. city whose first airport is named after a World War II hero and its second after a World War II battle, it gave Toronto, Ontario. Not even close!
  • Both the humans on the program knew the correct answer: Chicago. Even a famously geographically challenged person like me
  • Why did I know it? Because I have spent enough time stranded at O'Hare to have visited the monument to Butch O'Hare in the terminal. Watson, who has not, came up with the wrong answer. This reveals precisely what Watson lacks -- embodiment.
  • Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.
  • Providing unique answers to questions limited to around 25 words is not the same as dealing with real problems of an emotionally distraught passenger in an open system where there may not be a unique answer.
  • Watson beating the pants out of us on "Jeopardy!" is fun -- rather like seeing a tractor beat a human tug-of-war team. Machines have always been better than humans at some tasks.
5More

Anonymous speaks: the inside story of the HBGary hack - 0 views

  • The attackers just needed a little bit more information: they needed a regular, non-root user account to log in with, because as a standard security procedure, direct ssh access with the root account is disabled. Armed with the two pieces of knowledge above, and with Greg's e-mail account in their control, the social engineers set about their task. The e-mail correspondence tells the whole story:
    From: Greg
    To: Jussi
    Subject: need to ssh into rootkit
    im in europe and need to ssh into the server. can you drop open up firewall and allow ssh through port 59022 or something vague? and is our root password still 88j4bb3rw0cky88 or did we change to 88Scr3am3r88 ? thanks
  • Thanks indeed. To be fair to Jussi, the fake Greg appeared to know the root password and, well, the e-mails were coming from Greg's own e-mail address. But over the course of a few e-mails it was clear that "Greg" had forgotten both his username and his password. And Jussi handed them to him on a platter. Later on, Jussi did appear to notice something was up:
    From: Jussi
    To: Greg
    Subject: Re: need to ssh into rootkit
    did you open something running on high port?
  • From: Jussi
    To: Greg
    Subject: Re: need to ssh into rootkit
    hi, do you have public ip? or should i just drop fw? and it is w0cky - tho no remote root access allowed
  • ...2 more annotations...
  • So there are clearly two lessons to be learned here. The first is that the standard advice is good advice. If all best practices had been followed then none of this would have happened. Even if the SQL injection error was still present, it wouldn't have caused the cascade of failures that followed.
  • The second lesson, however, is that the standard advice isn't good enough. Even recognized security experts who should know better won't follow it. What hope does that leave for the rest of us?
7More

RealClimate: Going to extremes - 0 views

  • There are two new papers in Nature this week that go right to the heart of the conversation about extreme events and their potential relationship to climate change.
  • Let’s start with some very basic, but oft-confused points:
    1. Not all extremes are the same. Discussions of ‘changes in extremes’ in general without specifying exactly what is being discussed are meaningless. A tornado is an extreme event, but one whose causes, sensitivity to change and impacts have nothing to do with those related to an ice storm, or a heat wave or cold air outbreak or a drought.
    2. There is no theory or result that indicates that climate change increases extremes in general. This is a corollary of the previous statement – each kind of extreme needs to be looked at specifically – and often regionally as well. Some extremes will become more common in future (and some less so). We will discuss the specifics below.
    3. Attribution of extremes is hard. There are limited observational data to start with, insufficient testing of climate model simulations of extremes, and (so far) limited assessment of model projections.
  • The two new papers deal with the attribution of a single flood event (Pall et al), and the attribution of increased intensity of rainfall across the Northern Hemisphere (Min et al). While these issues are linked, they are quite distinct, and the two approaches are very different too.
  • ...4 more annotations...
  • The aim of the Pall et al paper was to examine a specific event – floods in the UK in Oct/Nov 2000. Normally, with a single event there isn’t enough information to do any attribution, but Pall et al set up a very large ensemble of runs starting from roughly the same initial conditions to see how often the flooding event occurred. Note that flooding was defined as more than just intense rainfall – the authors tracked runoff and streamflow as part of their modelled setup. Then they repeated the same experiments with pre-industrial conditions (less CO2 and cooler temperatures). If the number of times a flooding event occurred increased in the present-day setup, you can estimate how much more likely the event would have been because of climate change. The results gave varying numbers, but in nine out of ten cases the chance increased by more than 20%, and in two out of three cases by more than 90%. This kind of fractional attribution (if an event is 50% more likely with anthropogenic effects, that implies it is 33% attributable) has been applied also to the 2003 European heatwave, and will undoubtedly be applied more often in future. One neat and interesting feature of these experiments was that they used the climateprediction.net setup to harness the power of the public’s idle screensaver time.
  • The second paper is a more standard detection and attribution study. By looking at the signatures of climate change in precipitation intensity and comparing them to the internal variability and the observations, the researchers conclude that the probability of intense precipitation on any given day has increased by 7 percent over the last 50 years – well outside the bounds of natural variability. This result has been suggested before (e.g. in the IPCC report, Groisman et al. 2005), but this was the first proper attribution study (as far as I know). The signal seen in the data though, while coherent and similar to that seen in the models, was consistently larger, perhaps indicating the models are not sensitive enough, though the El Niño of 1997/8 may have had an outsize effect.
  • Both papers were submitted in March last year, prior to the 2010 floods in Pakistan, Australia, Brazil or the Philippines, and so did not deal with any of the data or issues associated with those floods. However, while questions of attribution come up whenever something weird happens to the weather, these papers demonstrate clearly that the instant pop-attributions we are always being asked for are just not very sensible. It takes an enormous amount of work to do these kinds of tests, and they just can’t be done instantly. As they are done more often though, we will develop a better sense for the kinds of events that we can say something about, and those we can’t.
  • There is always concern that the start and end points for any trend study are not appropriate (both sides are guilty on this IMO). I have read that precipitation studies are more difficult due to sparse data, and it seems we would have seen precipitation trend graphs a lot more often by now if it were straightforward. 7% seems to be a large change not to have been noted (vocally) earlier; it seems like there is more to this story.
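The fractional-attribution arithmetic quoted in the bullets above (an event 50% more likely is 33% attributable) is the standard fraction of attributable risk, FAR = 1 - 1/RR, where RR is the risk ratio between the anthropogenic and pre-industrial ensembles. A minimal sketch; the RR values below are just the examples quoted in the post, not data from the papers:

```python
def attributable_fraction(risk_ratio):
    """Fraction of attributable risk: FAR = 1 - 1/RR."""
    if risk_ratio <= 0:
        raise ValueError("risk ratio must be positive")
    return 1.0 - 1.0 / risk_ratio

# An event 50% more likely under anthropogenic forcing (RR = 1.5)
# is one-third attributable, as the post states.
print(attributable_fraction(1.5))   # ~0.333
# The quoted bounds: RR > 1.2 in nine of ten cases, RR > 1.9 in two of three
print(attributable_fraction(1.2))   # ~0.167
print(attributable_fraction(1.9))   # ~0.474
```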
3More

The Price of Fuel | How Countries Compare - 0 views

  • In 2008, crude oil topped $111 a barrel for the first time. During that time, the U.S. average retail price for regular unleaded gasoline reached $3.28 a gallon. Despite the increase, people in the United States still pay significantly less for gasoline than people in many other countries.
  • This chart depicts the elements of production, transportation, refining and distribution required to transform crude oil into finished petroleum products like gasoline.
  • In the United States, the average tax on gasoline is 47 cents per gallon. The tax is usually a combination of federal, state and local fees, underground storage tank fees and other environmental fees. Many European countries attach much higher fees as an incentive to reduce greenhouse gas emissions and raise revenue, increasing the overall price of gasoline.
15More

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • ...12 more annotations...
  • What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, corresponding increases in streamflow (floods) or damage have not been shown. How can this be?  Think of it like this -- precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with the increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction.
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  The Pall et al. paper is thus focused on attribution of a single event, not on trend detection in the region that it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier through the decision to bring up the topic of insurance. (Not to mention that which is symbolically represented merely by the journal’s cover this week). With what I (maybe, naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame of increased losses on climate change. Then, the article ends with a quote from him, “If there’s evidence that risk is changing, then this is something we need to incorporate in our models.”
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that “science” supports the remodeling of the hazard risk above the climatological average and that this is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard where losses are concerned. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone, regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks- including their own, if any of them are proud owners of a home mortgage? How much faith does one have in their own model when they are told that hundreds of millions of dollars in the global economy are being bet against the odds that their models produce?
  • What Nature says matters to the world; what scientists say matters to the world- whether they care for the responsibility or not. That is after all, the game of fame and fortune (aka prestige).
14More

The world through language » Scienceline - 0 views

  • If you know only one language, you live only once. A man who knows two languages is worth two men. He who loses his language loses his world. (Czech, French and Gaelic proverbs.)
  • The hypothesis first put forward fifty years ago by linguist Benjamin Lee Whorf—that our language significantly affects our experience of the world—is making a comeback in various forms, and with it no shortage of debate.
  • The idea that language shapes thought was taboo for a long time, said Dan Slobin, a psycholinguist at the University of California, Berkeley. “Now the ice is breaking.” The taboo, according to Slobin, was largely due to the widespread acceptance of the ideas of Noam Chomsky, one of the most influential linguists of the 20th century. Chomsky proposed that the human brain comes equipped at birth with a set of rules—or universal grammar—that organizes language. As he likes to say, a visiting Martian would conclude that everyone on Earth speaks mutually unintelligible dialects of a single language.
  • ...11 more annotations...
  • Chomsky is hesitant to accept the recent claims of language’s profound influence on thought. “I’m rather skeptical about all of this, though there probably are some marginal effects,” he said.
  • Some advocates of the Whorfian view find support in studies of how languages convey spatial orientation. English and Dutch speakers describe orientation from an egocentric frame of reference (to my left or right). Mayan speakers use a geocentric frame of reference (to the north or south).
  • Does this mean they think about space in fundamentally different ways? Not exactly, said Lila Gleitman, a psychologist from the University of Pennsylvania. Since we ordinarily assume that others talk like us, she explained, vague instructions like “arrange it the same way” will be interpreted in whatever orientation (egocentric or geocentric) is most common in our language. “That’s going to influence how you solve an ambiguous problem, but it doesn’t mean that’s the way you think, or must think,” said Gleitman. In fact, she repeated the experiment with unambiguous instructions, providing cues to indicate whether objects should be arranged north-south or left-right. She found that people in both languages are just as good at arranging objects in either orientation.
  • Similarly, Anna Papafragou, a psychologist at the University of Delaware, thinks that the extent of language’s effect on thought has been somewhat exaggerated.
  • Papafragou compared how long Greek and English speakers paid attention to clip-art animation sequences, for example, a man skating towards a snowman. By measuring their eye movements, Papafragou was able to tell which parts of the scene held their gaze the longest. Because English speakers generally use verbs that describe manner of motion, like slide and skip, she predicted they would pay more attention to what was moving (the skates). Since Greek speakers use verbs that describe path, like approach and ascend, they should pay more attention to the endpoint of the motion (the snowman). She found that this was true only when people had to describe the scene; when asked to memorize it, attention patterns were nearly identical. According to Papafragou, when people need to speak about what they see, they’ll focus on the parts relevant for planning sentences. Otherwise, language does not show much of an effect on attention.
  • “Each language is a bright transparent medium through which our thoughts may pass, relatively undistorted,” said Gleitman.
  • Others think that language does, in fact, introduce some distortion. Linguist Guy Deutscher of the University of Manchester in the U.K. suggests that while language can’t prevent you from thinking anything, it does compel you to think in specific ways. Language forces you to habitually pay attention to different aspects of the world.
  • For example, many languages assign genders to nouns (“bridge” is feminine in German and masculine in Spanish). A study by cognitive psychologist Lera Boroditsky of Stanford University found that German speakers were more likely to describe “bridge” with feminine terms like elegant and slender, while Spanish speakers picked words like sturdy and towering. Having to constantly keep track of gender, Deutscher suggests, may subtly change the way native speakers imagine objects’ characteristics.
  • However, this falls short of the extreme view some ascribe to Whorf: that language actually determines thought. According to Steven Pinker, an experimental psychologist and linguist at Harvard University, three things have to hold for the Whorfian hypothesis to be true: speakers of one language should find it nearly impossible to think like speakers of another language; the differences in language should affect actual reasoning; and the differences should be caused by language, not just correlated with it. Otherwise, we may just be dealing with a case of “crying Whorf.”
  • But even mild claims may reveal complexities in the relationship between language and thought. “You can’t actually separate language, thought and perception,” said Debi Roberson, a psychologist at the University of Essex in the U.K. “All of these processes are going on, not just in parallel, but interactively.”
  • Language may not, as the Gaelic proverb suggests, form our entire world. But it will continue to provide insights into our thoughts—whether as a window, a looking glass, or a distorted mirror.
8More

How Is Twitter Impacting Search and SEO? Here's the (Visual) Proof | MackCollier.com - ... - 0 views

  • I picked a fairly specific term, in “Social Media Crisis Management”.  I checked prior to publishing yesterday’s post, and there were just a shade under 29,000 Google results for that term.  This is important because you need to pick as specific a term as possible: a more specific term means less competition, and (if you’ve picked the right term for you) it means you will be more likely to get the ‘right’ kind of traffic.
  • Second, I made sure the term was in the title and mentioned a couple of times in the post.  I also put the term “Social Media Crisis Management” at the front of the post title; I originally had the title as “A No-Nonsense Guide to Social Media Crisis Management”, but Amy wisely suggested that I flip it so the term I was targeting was at the front.
  • When I published the post yesterday at 12:20pm, there were 28,900 Google results for the term “Social Media Crisis Management”.  I tweeted a link to it at that time.  Fifty minutes later at 1:10pm, the post was already showing up on the 3rd page for a Google search of “Social Media Crisis Management”:
  • ...5 more annotations...
  • I tweeted out another link to the post around 2pm, and then at 2:30pm, it moved a bit further up the results on the 3rd page:
  • The Latest results factor in real-time linking behavior, so they pick up all the tweets where my post was being RTed; as a result, the top half of the Latest results for the term “Social Media Crisis Management” was completely devoted to MY post.
  • That’s a perfect example of how Twitter and Facebook sharing is now impacting Google results.  And it’s also a wonderful illustration of the value of being active on Twitter.  I tweeted a link to that post several times yesterday and this morning, which was a big reason why it moved up the Google results so quickly, and a big reason why it dominated the Latest results for that term.
  • there are two things I want you to take away from this: 1 – This was very basic SEO stuff that any of you can do.  It was simply a case of targeting a specific phrase, and inserting it in the post.  Now as far as my having a large and engaged Twitter network and readership here (thanks guys!), that definitely played a big factor in the post moving up the results so quickly.  But at a basic level, everything I did from a SEO perspective is what you can do with every post.  And you should.
  • 2 – You can best learn by breaking stuff.  There are a gazillion ‘How to’ and ’10 Steps to…’ articles about using social media, and I have certainly written my fair share of these.  But the best way *I* learn is if you can show me the first 1 or 2 steps, then leave me alone and let me figure out the remaining 8 or 9 steps for myself.  Don’t just blindly follow my social media advice or anyone else’s.  Use the advice as a guide for how you can get started.  But there is no one RIGHT way to use social media.  Never forget that.  I can tell you what works for me and my clients, but you still need to tweak any advice so that it is perfect for you.  SEO geeks will no doubt see a ton of things that I could have done or altered in this experiment to get even better results.  And moving forward, I am going to continue to tweak and ‘break stuff’ in order to better figure out how all the moving parts work together.
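The on-page basics in point 1 (pick a specific phrase, lead the title with it, mention it a few times in the body) can be expressed as a tiny checklist. The function name, thresholds and sample strings below are all invented for illustration; this is a sketch of the idea, not a real SEO tool:

```python
def seo_check(term, title, body, min_mentions=2):
    """Tiny checklist for the two on-page basics described in the post.
    Purely illustrative; the thresholds here are arbitrary."""
    t, title_l, body_l = term.lower(), title.lower(), body.lower()
    return {
        "term_leads_title": title_l.startswith(t),
        "term_in_title": t in title_l,
        "body_mentions": body_l.count(t),
        "enough_mentions": body_l.count(t) >= min_mentions,
    }

report = seo_check(
    "social media crisis management",
    "Social Media Crisis Management: A No-Nonsense Guide",
    "Social media crisis management matters, so build your "
    "social media crisis management plan before you need it.",
)
print(report)
```

A post that leads its title with the target phrase and repeats it in the body passes every check; the Twitter sharing that drove the rankings in this experiment is, of course, outside anything a static checklist can capture.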

BBC News - Graduates - the new measure of power

  • There are more universities than ever before operating in other countries, recruiting students from overseas, setting up partnerships, providing online degrees and teaching in other languages. [Image caption: South Korea has turned itself into a global player in higher education.] Chinese students are taking degrees taught in English in Finnish universities; the Sorbonne is awarding French degrees in Abu Dhabi; US universities are opening in China; and South Korean universities are switching teaching to English so they can compete with everyone else. It's like one of those board games where all the players are trying to move on to everyone else's squares. It's not simply a case of western universities looking for new markets. Many countries in the Middle East and Asia are deliberately seeking overseas universities as a way of fast-forwarding a research base.
  • "There's a world view that universities, and the most talented people in universities, will operate beyond sovereignty. "Much like in the renaissance in Europe, when the talent class and the creative class travelled among the great idea capitals, so in the 21st century, the people who carry the ideas that will shape the future will travel among the capitals.
  • "But instead of old European names it will be names like Shanghai and Abu Dhabi and London and New York. Those universities will be populated by those high-talent people." New York University, one of the biggest private universities in the US, has campuses in New York and Abu Dhabi, with plans for another in Shanghai. It also has a further 16 academic centres around the world. Mr Sexton sets out a different kind of map of the world, in which universities, with bases in several cities, become the hubs for the economies of the future, "magnetising talent" and providing the ideas and energy to drive economic innovation.
  • Universities are also being used as flag carriers for national economic ambitions - driving forward modernisation plans. For some it's been a spectacularly fast rise. According to the OECD, in the 1960s South Korea had a similar national wealth to Afghanistan. Now it tops international education league tables and has some of the highest-rated universities in the world. The Pohang University of Science and Technology in South Korea was only founded in 1986 - and is now in the top 30 of the Times Higher's global league table, elbowing past many ancient and venerable institutions. It also wants to compete on an international stage so the university has decided that all its graduate programmes should be taught in English rather than Korean.
  • Governments want to use universities to upgrade their workforce and develop hi-tech industries.
  • "Universities are being seen as a key to the new economies, they're trying to grow the knowledge economy by building a base in universities," says Professor Altbach. Families, from rural China to eastern Europe, are also seeing university as a way of helping their children to get higher-paid jobs. A growing middle-class in India is pushing an expansion in places. Universities also stand to gain from recruiting overseas. "Universities in the rich countries are making big bucks," he says. This international trade is worth at least $50 billion a year, he estimates, the lion's share currently being claimed by the US.
  • Technology, much of it hatched on university campuses, is also changing higher education and blurring national boundaries.
  • It raises many questions too. What are the expectations of this Facebook generation? They might have degrees and be able to see what is happening on the other side of the world, but will there be enough jobs to match their ambitions? Who is going to pay for such an expanded university system? And what about those who will struggle to afford a place?
  • The success of the US system is not just about funding, says Professor Altbach. It's also because it's well run and research is effectively organised. "Of course there are lots of lousy institutions in the US, but overall the system works well." "Developed economies are already highly dependent on universities and if anything that reliance will increase," says David Willetts, the UK universities minister. The status of the US system has been bolstered by the link between its university research and developing hi-tech industries. Icons of the internet age such as Google and Facebook grew out of US campuses.

In Singapore, some thoughts are not All Right « Yawning Bread on Wordpress

  • If you think R21 is the strictest classification a movie in Singapore can receive, think again. The Oscar-nominated drama The Kids Are All Right has been rated R21 and has also had an additional condition imposed on it. The Board of Film Censors (BFC) says that it can only be released on one print. This is likely to be the first time an R21 film will be screened under such a condition outside of a film festival.
  • Further down the news article, it was explained that the Board of Film Censors issued a letter earlier this week to the film’s distributor, Festive Films: It stated: ‘The majority of the members [of the Committee of Appeal] agreed with the board that the film normalises a homosexual family unit and has exceeded the film classification guidelines which states that ‘Films that promote or normalise a homosexual lifestyle cannot be allowed’.’ In addition, the committee said the fact that the film is allowed for release in Singapore at all was already a concession. It said: ‘Imposing a condition of one-print serves as a signal to the public at large that such alternative lifestyles should not be encouraged.’ – ibid
  • Firstly, can/should the civil service create additional rules at whim? Secondly, why is the idea of two gay persons raising a family considered something to be defended against?
  • Is it a proper mission of the State to demand that its citizens not think these thoughts? Is it the proper use of State power to deny or severely limit access to such ideas? It is all the more ridiculous when this film, The Kids Are All Right, has been nominated for four Oscars this year — for Best Picture, Best Original Screenplay, Best Actress and Best Supporting Actor. Much of the world is talking about the film and the issues it raises, and the Singapore government is determined to make up our minds about the matter and give Singaporeans as little opportunity as possible to see the film for ourselves. All the while, the propaganda goes on: We are a world-class global city.
  • The root problem, as I have argued many times before, is the failure of our government to respect the constitution, which mandates freedom of expression. Instead, their guiding policy is to allow majoritarian views to ride roughshod over other points of view. Worse yet, sometimes it is even arguable whether the view being defended has majority support, since in the matter of film classification, the government appoints its own nominees as the “public”  consultation body. How do we know whether they represent the public?
  • As the press report above indicates, the government is, in this instance, invoking the film classification guidelines because somewhere in them is the clause that ‘Films that promote or normalise a homosexual lifestyle cannot be allowed’, words that the government itself penned. The exact words, not that I agree with them, in the current Guidelines are: Films should not promote or normalise a homosexual lifestyle. However, non-exploitative and non-explicit depictions of sexual activity between two persons of the same gender may be considered for R21. – http://www.mda.gov.sg/Documents/PDF/FilmClassificationGuidelines_Final2010.pdf, accessed 17 Feb 2011.
  • By the example of the treatment of this film, we now shine new light on the censorship impulse:  gay sex can be suggested in non-explicit ways in film, but gay people living ordinary, respectable lives, doing non-sexual things, (e.g. raising a family and looking after children) cannot. It really boils down to reinforcing a policy that has been in effect for a long time, and which I have found extremely insulting: Gay people can be depicted as deviants that come to tragic ends, but any positive portrayal must be cut out.
  • You would also notice that nowhere in this episode is reference made to the 2009/2010 Censorship Review Committee’s Report. This Committee I have already lambasted as timid and unprincipled. Yet, its (gutless) words are these: It is also not surprising that the CRC received many submissions calling for a lighter hand in the classification of films and plays which contain homosexual themes.  Homosexuality and other nontraditional lifestyles remain contentious issues for Singapore. While the MDA’s content regulators have to calibrate their decisions on ratings according to the majority, the CRC agrees that minority interests should also be considered and that a flexible and contextual approach should be taken for content depicting homosexuality. At the same time, clear and specific audience advisories should accompany the ratings so that the content issues will warn away those who think they may be offended by such content. – http://www.crc2009.sg/images/pdf/CRC%202010%20Report%20%28website%29.pdf, accessed 17 Feb 2011, para 24.
  • The government, in its Response to the CRC’s Report, said 63. Recommendation: A flexible and contextual approach for homosexual content should be adopted. Govt’s response: Agree. The current practice is already sufficiently flexible. Industry and artists must also be prepared to be more explicit in advising consumers on homosexual content. – http://www.crc2009.sg/images/pdf/Govt%27s%20Response%20to%20CRC%20Recommendations.pdf, accessed 17 Feb 2011.
  • And what do the civil servants do? They tighten up. They seize up like frigid vaginas and assholes at the very introduction of an Other. These civil servants create a new rule that limits the classified film to just one copy. They violate their own name and mission — “Film Classification” — by doing more than classification, branching into distribution limitation. To serve whose agenda?

Mike Adams Remains True to Form « Alternative Medicine « Health « Skeptic North

  • The 10:23 demonstrations and the CBC Marketplace coverage have elicited fascinating case studies in CAM professionalism. Rather than offering any new information or evidence about homeopathy itself, some homeopaths have spuriously accused skeptical groups of being malicious Big Pharma shills.
  • Mike Adams of the Natural News website has decided to provide his own coverage of the 10:23 campaign
  • Mike’s thesis is essentially: Silly skeptics, it’s impossible to OD on homeopathy!
  • 1. “Notice that they never consume their own medicines in large doses? Chemotherapy? Statin drugs? Blood thinners? They wouldn’t dare drink those.”
  • Of course we wouldn’t. Steven Novella rightly points out that, though Mike thinks he’s being clever here, he’s actually demonstrating a lack of understanding for what the 10:23 campaign is about by using a straw man. Mike later issues a challenge for skeptics to drink their favourite medicines while he drinks homeopathy. Since no one will agree to that for the reasons explained above, he can claim some sort of victory — hence his smugness. But no one is saying that drugs aren’t harmful.
  • The difference between medicine and poison is in the dose. The vitamins and herbs promoted by the CAM industry are just as potentially harmful as any pharmaceutical drug, given enough of it. Would Adams be willing to OD on the vitamins or herbal remedies that he sells?
  • Even Adams’ favorite panacea, vitamin D, is toxic if you take enough of it (just ask Gary Null). Notice how skeptics don’t consume those either, because that is not the point they’re making.
  • The point of these demonstrations is that homeopathy has nothing in it, has no measurable physiological effects, and does not do what is advertised on the package.
  • 2. “Homeopathy, you see, isn’t a drug. It’s not a chemical.” Well, he’s got that right. “You know the drugs are kicking in when you start getting worse. Toxicity and conventional medicine go hand in hand.” [emphasis his]
  • Here I have to wonder if Adams knows any people with diabetes, AIDS, or any other illness that used to mean a death sentence before the significant medical advances of the 20th century that we now take for granted. So far he seems to be a firm believer in the false dichotomy that drugs are bad and natural products are good, regardless of what’s in them or how they’re used (as we know, natural products can have biologically active substances and effectively act as impure drugs – but leave it to Adams not to get bogged down with details). There is nothing to support the assertion that conventional medicine is nothing but toxic symptom-inducers.
  • 3-11. “But homeopathy isn’t a chemical. It’s a resonance. A vibration, or a harmony. It’s the restructuring of water to resonate with the particular energy of a plant or substance. We can get into the physics of it in a subsequent article, but for now it’s easy to recognize that even from a conventional physics point of view, liquid water has tremendous energy, and it’s constantly in motion, not just at the molecular level but also at the level of its subatomic particles and so-called “orbiting electrons” which aren’t even orbiting in the first place. Electrons are vibrations and not physical objects.” [emphasis his]
  • This is Star Trek-like technobabble – lots of sciency words
  • if something — anything — has an effect, then that effect is measurable by definition. Either something works or it doesn’t, regardless of mechanism. In any case, I’d like to see the well-documented series of research that conclusively proves this supposed mechanism. Actually, I’d like to see any credible research at all. I know what the answer will be to that: science can’t detect this yet. Well if you agree with that statement, reader, ask yourself this: then how does Adams know? Where did he get this information? Without evidence, he is guessing, and what is that really worth?
  • 13. “But getting back to water and vibrations, which isn’t magic but rather vibrational physics, you can’t overdose on a harmony. If you have one violin playing a note in your room, and you add ten more violins — or a hundred more — it’s all still the same harmony (with all its complex higher frequencies, too). There’s no toxicity to it.” [emphasis his]
  • Homeopathy has standard dosing regimes (they’re all the same), but there is no “dose” to speak of: the ingredients have usually been diluted out to nothing. But Adams is also saying that homeopathy doesn’t work by dose at all, it works by the properties of “resonance” and “vibration”. Then why any dosing regimen? To maintain the resonance? How is this resonance measured? How long does the “resonance” last? Why does it wear off? Why does he think televisions can inactivate homeopathy? (I think I might know the answer to that last one, as electronic interference is a handy excuse for inefficacy.)
  • “These skeptics just want to kill themselves… and they wouldn’t mind taking a few of you along with them, too. Hence their promotion of vaccines, pharmaceuticals, chemotherapy and water fluoridation. We’ll title the video, “SKEPTICS COMMIT MASS SUICIDE BY DRINKING PHARMACEUTICALS AS IF THEY WERE KOOL-AID.” Jonestown, anyone?”
  • “Do you notice the irony here? The only medicines they’re willing to consume in large doses in public are homeopathic remedies! They won’t dare consume large quantities of the medicines they all say YOU should be taking! (The pharma drugs.)” [emphasis his]
  • What Adams seems to have missed is that the skeptics have no intention of killing themselves, so his bizarre claims that the 10:23 participants are psychopathic, self-loathing, and suicidal make not even a little bit of sense. Skeptics know they aren’t going to die with these demonstrations, because homeopathy has no active ingredients and no evidence of efficacy.
  • The inventor of homeopathy himself, Samuel Hahnemann, believed that excessive doses of homeopathy could be harmful (see sections 275 and 276 of his Organon). Homeopaths are pros at retconning their own field to fit in with Hahnemann’s original ideas (inventing new mechanisms, such as water memory and resonance, in the face of germ theory). So how does Adams reconcile this claim?
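The arithmetic behind the skeptics' "nothing in it" point is easy to verify. A minimal back-of-the-envelope sketch, assuming a typical 30C remedy (thirty successive 1:100 dilutions) and a generous one mole of starting substance:

```python
# Back-of-the-envelope check on a 30C homeopathic dilution:
# thirty successive 1:100 dilutions of one mole of starting substance.
AVOGADRO = 6.022e23           # molecules per mole
starting_molecules = 1.0 * AVOGADRO
dilution_factor = 100 ** 30   # (1:100) applied 30 times = 10^60

expected_molecules = starting_molecules / dilution_factor
print(f"{expected_molecules:.3e}")  # roughly 6e-37: effectively zero
```

An expected count far below one molecule means a given dose is overwhelmingly likely to contain none of the original substance at all, which is why drinking a whole bottle is not an overdose in any pharmacological sense.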

Early microscopes offered sharp vision : Nature News

  • Inept modern reconstructions have given seventeenth-century instruments a bad name, says Ford. In contrast to the hazy images shown in some museums and television documentaries, the right lighting and focusing can produce micrographs of startling clarity using original microscopes or modern replicas ( see slideshow ).
  • A flea, as seen through an eighteenth-century microscope used poorly (left) and correctly (right).