
New Media Ethics 2009 course / Group items tagged Model


Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • 3. Collaboration at scale Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things' The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
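
The "test and learn" experimentation described in trend 5 boils down to running a controlled comparison and checking whether the observed difference is larger than chance. As a minimal, hedged illustration (the conversion counts and the two-proportion z-test below are an invented example, not anything from the McKinsey article):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, z, p_value

# Hypothetical experiment: new page B against the current page A.
lift, z, p = two_proportion_ztest(conversions_a=480, n_a=10_000,
                                  conversions_b=540, n_b=10_000)
print(f"observed lift: {lift:.4f}, z = {z:.2f}, p = {p:.3f}")
```

With these invented numbers the p-value sits near the conventional threshold, exactly the kind of case where instinct alone gives the wrong level of confidence; that is the cultural shift toward rigorous experimentation that the excerpt describes.
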
Weiye Loh

How wise are crowds? - 0 views

  • In the past, economists trying to model the propagation of information through a population would allow any given member of the population to observe the decisions of all the other members, or of a random sampling of them. That made the models easier to deal with mathematically, but it also made them less representative of the real world.
    • Weiye Loh
       
      Random sampling is not representative
  • “What this paper does is add the important component that this process is typically happening in a social network where you can’t observe what everyone has done, nor can you randomly sample the population to find out what a random sample has done, but rather you see what your particular friends in the network have done,” says Jon Kleinberg, Tisch University Professor in the Cornell University Department of Computer Science, who was not involved in the research. “That introduces a much more complex structure to the problem, but arguably one that’s representative of what typically happens in real settings.”
    • Weiye Loh
       
      So random sampling is actually more accurate?
  • Earlier models, Kleinberg explains, indicated the danger of what economists call information cascades. “If you have a few crucial ingredients — namely, that people are making decisions in order, that they can observe the past actions of other people but they can’t know what those people actually knew — then you have the potential for information cascades to occur, in which large groups of people abandon whatever private information they have and actually, for perfectly rational reasons, follow the crowd,”
  • The MIT researchers’ paper, however, suggests that the danger of information cascades may not be as dire as it previously seemed.
  • a mathematical model that describes attempts by members of a social network to make binary decisions — such as which of two brands of cell phone to buy — on the basis of decisions made by their neighbors. The model assumes that for all members of the population, there is a single right decision: one of the cell phones is intrinsically better than the other. But some members of the network have bad information about which is which.
  • The MIT researchers analyzed the propagation of information under two different conditions. In one case, there’s a cap on how much any one person can know about the state of the world: even if one cell phone is intrinsically better than the other, no one can determine that with 100 percent certainty. In the other case, there’s no such cap. There’s debate among economists and information theorists about which of these two conditions better reflects reality, and Kleinberg suggests that the answer may vary depending on the type of information propagating through the network. But previous models had suggested that, if there is a cap, information cascades are almost inevitable.
  • if there’s no cap on certainty, an expanding social network will eventually converge on an accurate representation of the state of the world; that wasn’t a big surprise. But they also showed that in many common types of networks, even if there is a cap on certainty, convergence will still occur.
  • “People in the past have looked at it using more myopic models,” says Acemoglu. “They would be averaging type of models: so my opinion is an average of the opinions of my neighbors’.” In such a model, Acemoglu says, the views of people who are “oversampled” — who are connected with a large enough number of other people — will end up distorting the conclusions of the group as a whole.
  • “What we’re doing is looking at it in a much more game-theoretic manner, where individuals are realizing where the information comes from. So there will be some correction factor,” Acemoglu says. “If I’m seeing you, your action, and I’m seeing Munzer’s action, and I also know that there is some probability that you might have observed Munzer, then I discount his opinion appropriately, because I know that I don’t want to overweight it. And that’s the reason why, even though you have these influential agents — it might be that Munzer is everywhere, and everybody observes him — that still doesn’t create a herd on his opinion.”
  • the new paper leaves a few salient questions unanswered, such as how quickly the network will converge on the correct answer, and what happens when the model of agents’ knowledge becomes more complex.
  • the MIT researchers begin to address both questions. One paper examines the rate of convergence, although Dahleh and Acemoglu note that its results are “somewhat weaker” than those about the conditions for convergence. Another paper examines cases in which different agents make different decisions given the same information: some people might prefer one type of cell phone, others another. In such cases, “if you know the percentage of people that are of one type, it’s enough — at least in certain networks — to guarantee learning,” Dahleh says. “I don’t need to know, for every individual, whether they’re for it or against it; I just need to know that one-third of the people are for it, and two-thirds are against it.” For instance, he says, if you notice that a Chinese restaurant in your neighborhood is always half-empty, and a nearby Indian restaurant is always crowded, then information about what percentages of people prefer Chinese or Indian food will tell you which restaurant, if either, is of above-average or below-average quality.
  •  
    By melding economics and engineering, researchers show that as social networks get larger, they usually get better at sorting fact from fiction.
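
A minimal sketch of the kind of sequential social-learning setting described in the annotations above; this is an invented toy (naive majority-imitation on a random network of earlier agents), not the MIT authors' game-theoretic model, and it mainly illustrates how easily the simpler imitation rules produce the information cascades that the earlier models warned about:

```python
import random

def simulate(n_agents=2000, signal_accuracy=0.6, n_neighbors=5, seed=0):
    """Agents choose option 1 or 0 in sequence.

    Each agent gets a noisy private signal about the better option (option 1)
    and sees the choices of a few randomly chosen earlier agents ("friends");
    it follows the neighborhood majority and only falls back on its own signal
    when the neighbors are tied.
    """
    rng = random.Random(seed)
    choices = []
    for i in range(n_agents):
        private = 1 if rng.random() < signal_accuracy else 0
        if i == 0:
            choices.append(private)
            continue
        neighbors = rng.sample(range(i), min(n_neighbors, i))
        votes = sum(choices[j] for j in neighbors)
        if 2 * votes > len(neighbors):
            choices.append(1)
        elif 2 * votes < len(neighbors):
            choices.append(0)
        else:
            choices.append(private)          # tie: trust the private signal
    return choices

choices = simulate()
late = choices[-500:]
print("fraction of late agents choosing the better option:", sum(late) / len(late))
```

Across different seeds the late agents often herd on whichever option the early agents happened to favour, sometimes the wrong one; the point in the excerpt is that rational agents who discount over-observed neighbours avoid much of this herding, so larger networks tend to converge on the truth.
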
Weiye Loh

RealClimate: Feedback on Cloud Feedback - 0 views

  • I have a paper in this week’s issue of Science on the cloud feedback
  • clouds are important regulators of the amount of energy in and out of the climate system. Clouds both reflect sunlight back to space and trap infrared radiation and keep it from escaping to space. Changes in clouds can therefore have profound impacts on our climate.
  • A positive cloud feedback loop posits a scenario whereby an initial warming of the planet, caused, for example, by increases in greenhouse gases, causes clouds to trap more energy and lead to further warming. Such a process amplifies the direct heating by greenhouse gases. Models have long predicted this, but testing the models has proved difficult.
  • Making the issue even more contentious, some of the more credible skeptics out there (e.g., Lindzen, Spencer) have been arguing that clouds behave quite differently from that predicted by models. In fact, they argue, clouds will stabilize the climate and prevent climate change from occurring (i.e., clouds will provide a negative feedback).
  • In my new paper, I calculate the energy trapped by clouds and observe how it varies as the climate warms and cools during El Nino-Southern Oscillation (ENSO) cycles. I find that, as the climate warms, clouds trap an additional 0.54±0.74W/m2 for every degree of warming. Thus, the cloud feedback is likely positive, but I cannot rule out a slight negative feedback.
  • while a slight negative feedback cannot be ruled out, the data do not support a negative feedback large enough to substantially cancel the well-established positive feedbacks, such as water vapor, as Lindzen and Spencer would argue.
  • I have also compared the results to climate models. Taken as a group, the models substantially reproduce the observations. This increases my confidence that the models are accurately simulating the variations of clouds with climate change.
  • Dr. Spencer is arguing that clouds are causing ENSO cycles, so the direction of causality in my analysis is incorrect and my conclusions are in error. After reading this, I initiated a cordial and useful exchange of e-mails with Dr. Spencer (you can read the full e-mail exchange here). We ultimately agreed that the fundamental disagreement between us is over what causes ENSO. Short paraphrase: Spencer: ENSO is caused by clouds. You cannot infer the response of clouds to surface temperature in such a situation. Dessler: ENSO is not caused by clouds, but is driven by internal dynamics of the ocean-atmosphere system. Clouds may amplify the warming, and that’s the cloud feedback I’m trying to measure.
  • My position is the mainstream one, backed up by decades of research. This mainstream theory is quite successful at simulating almost all of the aspects of ENSO. Dr. Spencer, on the other hand, is as far out of the mainstream when it comes to ENSO as he is when it comes to climate change. He is advancing here a completely new and untested theory of ENSO — based on just one figure in one of his papers (and, as I told him in one of our e-mails, there are other interpretations of those data that do not agree with his interpretation). Thus, the burden of proof is Dr. Spencer to show that his theory of causality during ENSO is correct. He is, at present, far from meeting that burden. And until Dr. Spencer satisfies this burden, I don’t think anyone can take his criticisms seriously.
  • It’s also worth noting that the picture I’m painting of our disagreement (and backed up by the e-mail exchange linked above) is quite different from the picture provided by Dr. Spencer on his blog. His blog is full of conspiracies and purposeful suppression of the truth. In particular, he accuses me of ignoring his work. But as you can see, I have not ignored it — I have dismissed it because I think it has no merit. That’s quite different. I would also like to respond to his accusation that the timing of the paper is somehow connected to the IPCC’s meeting in Cancun. I can assure everyone that no one pressured me in any aspect of the publication of this paper. As Dr. Spencer knows well, authors have no control over when a paper ultimately gets published. And as far as my interest in influencing the policy debate goes, I’ll just say that I’m in College Station this week, while Dr. Spencer is in Cancun. In fact, Dr. Spencer had a press conference in Cancun — about my paper. I didn’t have a press conference about my paper. Draw your own conclusion.
  • This is but another example of how climate scientists are being played by the denialists. You attempted to discuss the issue with Spencer as if he were only doing science. But he is not. He is doing science and politics, and he has no compunction about sandbagging you. There is no gain to you in trying to deal with people like Spencer and Lindzen as colleagues. They are not trustworthy.
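
The feedback number quoted above (0.54±0.74 W/m2 per degree) is the slope of a regression of cloud-trapped energy against surface temperature over ENSO-driven variability. Below is a hedged sketch of that kind of calculation, run on made-up monthly anomalies rather than Dessler's actual data or method:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)

# Invented monthly anomalies standing in for ENSO-driven variability.
n_months = 120
t_anom = rng.normal(0.0, 0.3, n_months)           # surface temperature anomaly, K
assumed_feedback = 0.5                             # W/m^2 per K, chosen only for the toy data
cre_anom = assumed_feedback * t_anom + rng.normal(0.0, 1.0, n_months)  # cloud effect anomaly, W/m^2

fit = linregress(t_anom, cre_anom)
print(f"estimated cloud feedback: {fit.slope:.2f} +/- {2 * fit.stderr:.2f} W/m^2 per K")
# When the uncertainty range straddles zero, the conclusion reads like the post:
# the feedback is likely positive, but a slight negative value cannot be ruled out.
```
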
Weiye Loh

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, there have not been shown corresponding increases in streamflow (floods)  or damage. How can this be?  Think of it like this -- Precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with the increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction.
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  Thus, the Pall et al. paper is focused on attribution in the context of a single event, and not on trend detection in the region that it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the Nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier through the decision to bring up the topic of insurance. (Not to mention that which is symbolically represented merely by the journal’s cover this week). With what I (maybe, naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame of increased losses on climate change. Then, the article ends with a quote from him, “If there’s evidence that risk is changing, then this is something we need to incorporate in our models.”
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that “science” supports the remodeling of the hazard risk above the climatological average and is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard in concern of losses. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks - including their own, if any of them are proud owners of a home mortgage? How much faith does one have in their own model when they are told that hundreds of millions of dollars in the global economy are being bet against the odds that their models produce?
  • What Nature says matters to the world; what scientists say matters to the world- whether they care for the responsibility or not. That is after all, the game of fame and fortune (aka prestige).
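
The "70% of flow records show no statistically significant trend" result quoted above comes from applying trend tests to annual-maximum streamflow series. A minimal sketch of one common choice, a Mann-Kendall-style test via Kendall's tau, run here on invented data rather than any real gauge record:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)

# Invented 60-year record of annual maximum streamflow (m^3/s) with no built-in trend.
years = np.arange(1950, 2010)
annual_max_flow = rng.gamma(shape=4.0, scale=250.0, size=years.size)

tau, p_value = kendalltau(years, annual_max_flow)     # monotonic-trend test
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"Kendall tau = {tau:.2f}, p = {p_value:.2f} -> trend {verdict} at the 5% level")
```

Run station by station, counting how often the p-value clears the threshold is what produces statements like the 70% figure in the review quoted above.
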
Weiye Loh

Rationally Speaking: Should non-experts shut up? The skeptic's catch-22 - 0 views

  • You can read the talk here, but in a nutshell, Massimo was admonishing skeptics who reject the scientific consensus in fields in which they have no technical expertise - the most notable recent example of this being anthropogenic climate change, about which venerable skeptics like James Randi and Michael Shermer have publicly expressed doubts (though Shermer has since changed his mind).
  • I'm totally with Massimo that it seems quite likely that anthropogenic climate change is really happening. But I'm not sure I can get behind Massimo's broader argument that non-experts should defer to the expert consensus in a field.
  • First of all, while there are strong incentives for a researcher to find errors in other work in the field, there are strong disincentives for her to challenge the field's foundational assumptions. It will be extremely difficult for her to get other people to agree with her if she tries, and if she succeeds, she'll still be taking herself down along with the rest of the field.
  • Second of all, fields naturally select for people who accept their foundational assumptions. People who don't accept those assumptions are likely not to have gone into that field in the first place, or to have left it already.
  • Sometimes those foundational assumptions are simple enough that an outsider can evaluate them - for instance, I may not be an expert in astrology or theology, but I can understand their starting premises (stars affect human fates; we should accept the Bible as the truth) well enough to confidently dismiss them, and the fields that rest on them. But when the foundational assumptions get more complex - like the assumption that we can reliably model future temperatures - it becomes much harder for an outsider to judge their soundness.
  • we almost seem to be stuck in a Catch-22: The only people who are qualified to evaluate the validity of a complex field are the ones who have studied that field in depth - in other words, experts. Yet the experts are also the people who have the strongest incentives not to reject the foundational assumptions of the field, and the ones who have self-selected for believing those assumptions. So the closer you are to a field, the more biased you are, which makes you a poor judge of it; the farther away you are, the less relevant knowledge you have, which makes you a poor judge of it. What to do?
  • luckily, the Catch-22 isn't quite as stark as I made it sound. For example, you can often find people who are experts in the particular methodology used by a field without actually being a member of the field, so they can be much more unbiased judges of whether that field is applying the methodology soundly. So for example, a foundational principle underlying a lot of empirical social science research is that linear regression is a valid tool for modeling most phenomena. I strongly recommend asking a statistics professor about that. 
  • there are some general criteria that outsiders can use to evaluate the validity of a technical field, even without “technical scientific expertise” in that field. For example, can the field make testable predictions, and does it have a good track record of predicting things correctly? This seems like a good criterion by which an outsider can judge the field of climate modeling (and "predictions" here includes using your model to predict past data accurately). I don't need to know how the insanely-complicated models work to know that successful prediction is a good sign.
  • And there are other more field-specific criteria outsiders can often use. For example, I've barely studied postmodernism at all, but I don't have to know much about the field to recognize that the fact that they borrow concepts from complex disciplines which they themselves haven't studied is a red flag.
  • the issue with AGW is less about the science and more about the political solutions. Most every solution we hear in the public conversation requires some level of sacrifice and uncertainty in the future. Politicians, neither experts in climatology nor economics, craft legislation to solve the problem through the lens of their own political ideology. At TAM8, this was pretty apparent. My honest opinion is that people who are AGW skeptics are mainly skeptics of the political solutions. If AGW was said to increase the GDP of the country by two to three times, I'm guessing you'd see a lot fewer climate change skeptics.
Weiye Loh

Epiphenom: The evolution of dissent - 0 views

  • Genetic evolution in humans occurs in an environment shaped by culture - and culture, in turn is shaped by genetics.
  • If religion is a virus, then perhaps the spread of religion can be understood through the lens of evolutionary theory. Perhaps cultural evolution can be modelled using the same mathematical tools applied to genetic evolution.
  • Michael Doebli and Iaroslav Ispolatov at the University of  British Columbia
  • set out to model was the development of religious schisms. Such schisms are a recurrent feature of religion, especially in the West. The classic example is the fracturing of Christianity that occurred after the Reformation.
  • Their model made two simple assumptions. Firstly, that religions that are highly dominant actually induce some people to want to break away from them. When a religion becomes overcrowded, then some individuals will lose their religion and take up another.
  • Second, they assume that every religion has a value to the individual that is composed of its costs and benefits. That value varies between religions, but is the same for all individuals. It's a pretty simplistic assumption, but even so they get some interesting results.
  • Now, this is a very simple model, and so the results shouldn't be over-interpreted. But it's a fascinating result for a couple of reasons. It shows how new religious 'species' can come into being in a mixed population - no need for geographical separation. That's such a common feature of religion - from the Judaeo-Christian religions to examples from Papua New Guinea - that it's worth trying to understand what drives it. What's more, this is the first time that anyone has attempted to model the transmission of religious ideas in evolutionary terms. It's a first step, to be sure, but just showing that it can be done is a significant achievement.
  • The value comes because it shifts the focus from thinking about how culture benefits the host, and instead asks how the cultural trait is adaptive in its own right. What is important is not whether or not the human host benefits from the trait, but rather whether the trait can successfully transmit and reproduce itself (see Bible Belter for an example of how this could work).
  • Even more intriguing is the implications for understanding cultural-genetic co-evolution. After all, we know that viruses and their hosts co-evolve in a kind of arms race - sometimes ending up in a relationship that benefits both.
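
A toy sketch of the dynamic the excerpt describes, not the authors' actual model: each religion has a fixed intrinsic value, adherents pay a crowding penalty that grows with the religion's share, and a small fraction of people switch each round to whichever option currently looks best. The values, penalty and switching rule are all invented for illustration.

```python
def step(shares, values, crowding=1.5, switch_rate=0.02):
    """One round of switching: perceived payoff = intrinsic value minus a crowding penalty."""
    payoffs = {r: values[r] - crowding * shares[r] for r in shares}
    best = max(payoffs, key=payoffs.get)
    new = dict(shares)
    for r in shares:
        if r != best:
            moved = switch_rate * shares[r]   # a small fraction defects to the best option
            new[r] -= moved
            new[best] += moved
    return new

values = {"A": 1.0, "B": 0.8, "C": 0.7}       # invented intrinsic values
shares = {"A": 0.98, "B": 0.01, "C": 0.01}    # start from one dominant religion

for _ in range(2000):
    shares = step(shares, values)

print({r: round(s, 2) for r, s in shares.items()})
# Crowding lets the minority options grow despite A's higher intrinsic value,
# so a mixed population of 'species' emerges without any geographical separation.
```
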
Weiye Loh

Models, Plain and Fancy - NYTimes.com - 0 views

  • Karl Smith argues that informal economic arguments — models in the sense of thought experiments, not necessarily backed by equations and/or data-crunching — deserve more respect from the profession.
  • misunderstandings in economics come about because people don’t have in their minds any intuitive notion of what it is they’re supposed to be modeling.
  • And Karl Smith is right: no way could Hume have published such a thing in a modern journal. So yes, simple intuitive stories are important, and deserve more credit.
  • You could argue that modern economics really began with David Hume’s Of the Balance of Trade, whose core is a gloriously clear thought experiment
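
Hume's thought experiment can be restated as a few lines of arithmetic. The sketch below is one hedged reading of the price-specie-flow argument, not anything from the column: prices track each (otherwise identical) country's money stock, the dearer country runs a trade deficit, and the gold that settles the deficit moves prices back until the flow stops.

```python
def specie_flow(money_home, money_abroad, steps=200, adjustment=0.05):
    """Toy price-specie-flow mechanism: gold moves until price levels equalize."""
    for _ in range(steps):
        price_home, price_abroad = money_home, money_abroad      # prices proportional to money
        gold_outflow = adjustment * (price_home - price_abroad)  # high prices -> trade deficit
        money_home -= gold_outflow
        money_abroad += gold_outflow
    return round(money_home, 1), round(money_abroad, 1)

# Hume's scenario: suppose the home country's money stock were suddenly doubled.
print(specie_flow(money_home=200.0, money_abroad=100.0))
# The stocks converge (here toward 150 each): the windfall drains away through trade,
# which is the thought experiment's point that trade imbalances are self-correcting.
```
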
Weiye Loh

RealClimate: Going to extremes - 0 views

  • There are two new papers in Nature this week that go right to the heart of the conversation about extreme events and their potential relationship to climate change.
  • Let’s start with some very basic, but oft-confused points: Not all extremes are the same. Discussions of ‘changes in extremes’ in general without specifying exactly what is being discussed are meaningless. A tornado is an extreme event, but one whose causes, sensitivity to change and impacts have nothing to do with those related to an ice storm, or a heat wave or cold air outbreak or a drought. There is no theory or result that indicates that climate change increases extremes in general. This is a corollary of the previous statement – each kind of extreme needs to be looked at specifically – and often regionally as well. Some extremes will become more common in future (and some less so). We will discuss the specifics below. Attribution of extremes is hard. There are limited observational data to start with, insufficient testing of climate model simulations of extremes, and (so far) limited assessment of model projections.
  • The two new papers deal with the attribution of a single flood event (Pall et al), and the attribution of increased intensity of rainfall across the Northern Hemisphere (Min et al). While these issues are linked, they are quite distinct, and the two approaches are very different too.
  • The aim of the Pall et al paper was to examine a specific event – floods in the UK in Oct/Nov 2000. Normally, with a single event there isn’t enough information to do any attribution, but Pall et al set up a very large ensemble of runs starting from roughly the same initial conditions to see how often the flooding event occurred. Note that flooding was defined as more than just intense rainfall – the authors tracked runoff and streamflow as part of their modelled setup. Then they repeated the same experiments with pre-industrial conditions (less CO2 and cooler temperatures). If the number of times a flooding event occurred increased in the present-day setup, you can estimate how much more likely the event would have been because of climate change. The results gave varying numbers but in nine out of ten cases the chance increased by more than 20%, and in two out of three cases by more than 90%. This kind of fractional attribution (if an event is 50% more likely with anthropogenic effects, that implies it is 33% attributable) has been applied also to the 2003 European heatwave, and will undoubtedly be applied more often in future. One neat and interesting feature of these experiments was that they used the climateprediction.net setup to harness the power of the public’s idle screensaver time.
  • The second paper is a more standard detection and attribution study. By looking at the signatures of climate change in precipitation intensity and comparing that to the internal variability and the observation, the researchers conclude that the probability of intense precipitation on any given day has increased by 7 percent over the last 50 years – well outside the bounds of natural variability. This is a result that has been suggested before (i.e. in the IPCC report (Groisman et al, 2005), but this was the first proper attribution study (as far as I know). The signal seen in the data though, while coherent and similar to that seen in the models, was consistently larger, perhaps indicating the models are not sensitive enough, though the El Niño of 1997/8 may have had an outsize effect.
  • Both papers were submitted in March last year, prior to the 2010 floods in Pakistan, Australia, Brazil or the Philippines, and so did not deal with any of the data or issues associated with those floods. However, while questions of attribution come up whenever something weird happens to the weather, these papers demonstrate clearly that the instant pop-attributions we are always being asked for are just not very sensible. It takes an enormous amount of work to do these kinds of tests, and they just can’t be done instantly. As they are done more often though, we will develop a better sense for the kinds of events that we can say something about, and those we can’t.
  • There is always concern that the start and end points for any trend study are not appropriate (both sides are guilty on this IMO). I have read that precipitation studies are more difficult due to sparse data, and it seems we would have seen precipitation trend graphs a lot more often by now if it were straightforward. 7% seems a large change not to have been noted (vocally) earlier; it seems like there is more to this story.
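
The fractional-attribution arithmetic used in the Pall et al. discussion above ("50% more likely implies 33% attributable") is the fraction of attributable risk, FAR = 1 - P(pre-industrial) / P(present). A short worked example with invented ensemble counts, not the paper's actual numbers:

```python
def fraction_attributable_risk(p_present, p_preindustrial):
    """FAR = 1 - P(event | pre-industrial climate) / P(event | present climate)."""
    return 1.0 - p_preindustrial / p_present

# Invented counts of ensemble members that exceeded the flood threshold.
exceed_now, runs_now = 300, 2000       # present-day ensemble
exceed_pre, runs_pre = 200, 2000       # pre-industrial ensemble

p_now = exceed_now / runs_now          # 0.15
p_pre = exceed_pre / runs_pre          # 0.10
far = fraction_attributable_risk(p_now, p_pre)

print(f"event is {p_now / p_pre - 1:.0%} more likely; FAR = {far:.0%}")
# A 50% increase in likelihood gives FAR = 1 - 1/1.5, i.e. the 33% in the excerpt.
```
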
Weiye Loh

Random Thoughts Of A Free Thinker: The TCM vs. Western medicine debate -- a philosophic... - 0 views

  • there is a sub-field within the study of philosophy that looks at what should qualify as valid or certain knowledge. And one main divide in this sub-field would perhaps be the divide between empiricism and rationalism. Proponents of the former generally argue that only what can be observed by the senses should qualify as valid knowledge while proponents of the latter are more sceptical about sensory data since such data can be "false" (for example, optical illusions) and instead argue that valid knowledge should be knowledge that is congruent with reason.
  • Another significant divide in this sub-field would be the divide between positivism/scientism and non-positivism/scientism. Essentially, proponents of the former argue that only knowledge that is congruent with scientific reasoning or that can be scientifically proven should qualify as valid knowledge. In contrast, the proponents of non-positivism/scientism are of the stance that although scientific knowledge may indeed be a form of valid knowledge, it is not the only form of valid knowledge; knowledge derived from other sources or methods may be just as valid.
  • Evidently, the latter divide is relevant with regard to this debate over the validity of TCM, or alternative medicine in general, as a form of medical treatment vis-a-vis Western medicine, in that the general impression is perhaps that while Western medicine is scientifically proven, the former is however not as scientifically proven. And thus, to those who abide by the stance of positivism/scientism, this will imply that TCM, or alternative medicine in general, is not as valid or reliable a form of medical treatment as Western medicine. On the other hand, as can be seen from the letters written in to the ST Forum to defend TCM, there are those who will argue that although TCM may not be as scientifically proven, this does not however imply that it is not a valid or reliable form of medical treatment.
  • Of course, while there are similarities between the positions adopted in the "positivism/scientism versus non-positivism/scientism" and "Western medicine versus alternative medicine" debates, I suppose that one main difference is however that the latter is not just a theoretical debate but involves people's health and lives.
  • As was mentioned earlier, the general impression is perhaps that while Western medicine, which generally has its roots in Western societies, is scientifically proven, TCM, or alternative medicine, is however not as scientifically proven. The former is thus regarded as the dominant mainstream model of medical treatment while non-Western medical knowledge or treatment is regarded as "alternative medicine".
  • The process by which the above impression was created was, according to the postcolonial theorists, a highly political one. Essentially, it may be argued that along with their political colonisation of non-European territories in the past, the European/Western colonialists also colonised the minds of those living in those territories. This means that along with colonisation, traditional forms of knowledge, including medical knowledge, and cultures in the colonised territories were relegated to a non-dominant, if not inferior, position vis-a-vis Western knowledge and culture. And as postcolonial theorists may argue, the legacy and aftermath of this process is still felt today and efforts should be made to reverse it.
  • In light of the above, the increased push to have non-Western forms of medical treatment be recognised as an equally valid model of medical treatment besides that of Western medicine may be seen as part of the effort to reverse the dominance of Western knowledge and culture set in place during the colonial period. Of course, this push to reverse Western dominance is especially relevant in recent times, in light of the economic and political rise of non-Western powers such as China and India (interestingly enough, to the best of my knowledge, when talking about "alternative medicine", people are usually referring to traditional Indian or Chinese medical treatments and not really traditional African medical treatment).
  • Here, it is worthwhile to pause and think for a while: if it is recognised that Western and non-Western medicine are different but equally valid models of medical treatment, would they be complementary or competing models? Or would they be just different models?
  • Moving on, so far it would seem that, for at least the foreseeable future, Western medicine will retain its dominant "mainstream" position but who knows what the future may hold?
Weiye Loh

Learn to love uncertainty and failure, say leading thinkers | Edge question | Science |... - 0 views

  • Being comfortable with uncertainty, knowing the limits of what science can tell us, and understanding the worth of failure are all valuable tools that would improve people's lives, according to some of the world's leading thinkers.
  • The ideas were submitted as part of an annual exercise by the web magazine Edge, which invites scientists, philosophers and artists to opine on a major question of the moment. This year it was, "What scientific concept would improve everybody's cognitive toolkit?"
  • the public often misunderstands the scientific process and the nature of scientific doubt. This can fuel public rows over the significance of disagreements between scientists about controversial issues such as climate change and vaccine safety.
  • Carlo Rovelli, a physicist at the University of Aix-Marseille, emphasised the uselessness of certainty. He said that the idea of something being "scientifically proven" was practically an oxymoron and that the very foundation of science is to keep the door open to doubt.
  • "A good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability."
  • physicist Lawrence Krauss of Arizona State University agreed. "In the public parlance, uncertainty is a bad thing, implying a lack of rigour and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time," he said.
  • however, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning."
  • Neil Gershenfeld, director of the Massachusetts Institute of Technology's Centre for Bits and Atoms wants everyone to know that "truth" is just a model. "The most common misunderstanding about science is that scientists seek and find truth. They don't – they make and test models," he said.
  • Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features – violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom."
  • writer and web commentator Clay Shirky suggested that people should think more carefully about how they see the world. His suggestion was the Pareto principle, a pattern whereby the top 1% of the population control 35% of the wealth or, on Twitter, the top 2% of users send 60% of the messages. Sometimes known as the "80/20 rule", the Pareto principle means that the average is far from the middle. It is applicable to many complex systems. "And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world," said Shirky. "We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming."
  • Kevin Kelly, editor-at-large of Wired, pointed to the value of negative results. "We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced."
  • Michael Shermer, publisher of the Skeptic Magazine, wrote about the importance of thinking "bottom up not top down", since almost everything in nature and society happens this way.
  • But most people don't see things that way, said Shermer. "Bottom up reasoning is counterintuitive. This is why so many people believe that life was designed from the top down, and why so many think that economies must be designed and that countries should be ruled from the top down."
  • Roger Schank, a psychologist and computer scientist, proposed that we should all know the true meaning of "experimentation", which he said had been ruined by bad schooling, where pupils learn that scientists conduct experiments and if we copy exactly what they did in our high school labs we will get the results they got. "In effect we learn that experimentation is boring, is something done by scientists and has nothing to do with our daily lives." Instead, he said, proper experiments are all about assessing and gathering evidence. "In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment. But people who don't see their actions as experiments, and those who don't know how to reason carefully from data, will continue to learn less well from their own experiences than those who do."
  • Lisa Randall, a physicist at Harvard University, argued that perhaps "science" itself would be a useful concept for wider appreciation. "The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned – while appreciating and categorising the extent and limitations of what we know – plays a big role in how we think.
  • "Many words that summarise the nature of science such as 'cause and effect', 'predictions', and 'experiments', as well as words that describe probabilistic results such as 'mean', 'median', 'standard deviation', and the notion of 'probability' itself help us understand more specifically what this means and how to interpret the world and behaviour within it."
Weiye Loh

Roger Pielke Jr.'s Blog: It Is Always the Media's Fault - 0 views

  • Last summer NCAR issued a dramatic press release announcing that oil from the Gulf spill would soon be appearing on the beaches of the Atlantic ocean.  I discussed it here. Here are the first four paragraphs of that press release: BOULDER—A detailed computer modeling study released today indicates that oil from the massive spill in the Gulf of Mexico might soon extend along thousands of miles of the Atlantic coast and open ocean as early as this summer. The modeling results are captured in a series of dramatic animations produced by the National Center for Atmospheric Research (NCAR) and collaborators. The research was supported in part by the National Science Foundation, NCAR’s sponsor. The results were reviewed by scientists at NCAR and elsewhere, although not yet submitted for peer-review publication. “I’ve had a lot of people ask me, ‘Will the oil reach Florida?’” says NCAR scientist Synte Peacock, who worked on the study. “Actually, our best knowledge says the scope of this environmental disaster is likely to reach far beyond Florida, with impacts that have yet to be understood.” The computer simulations indicate that, once the oil in the uppermost ocean has become entrained in the Gulf of Mexico’s fast-moving Loop Current, it is likely to reach Florida's Atlantic coast within weeks. It can then move north as far as about Cape Hatteras, North Carolina, with the Gulf Stream, before turning east. Whether the oil will be a thin film on the surface or mostly subsurface due to mixing in the uppermost region of the ocean is not known.
  • A few weeks ago NCAR's David Hosansky, who presumably wrote that press release, asks whether NCAR got it wrong.  His answer?  No, not really: During last year’s crisis involving the massive release of oil into the Gulf of Mexico, NCAR issued a much-watched animation projecting that the oil could reach the Atlantic Ocean. But detectable amounts of oil never made it to the Atlantic, at least not in an easily visible form on the ocean surface. Not surprisingly, we’ve heard from a few people asking whether NCAR got it wrong. These events serve as a healthy reminder of a couple of things: the difference between a projection and an actual forecast, and the challenges of making short-term projections of natural processes that can act chaotically, such as ocean currents.
  • What then went wrong? First, the projection. Scientists from NCAR, the Department of Energy’s Los Alamos National Laboratory, and IFM-GEOMAR in Germany did not make a forecast of where the oil would go. Instead, they issued a projection. While there’s not always a clear distinction between the two, forecasts generally look only days or hours into the future and are built mostly on known elements (such as the current amount of humidity in the atmosphere). Projections tend to look further into the future and deal with a higher number of uncertainties (such as the rate at which oil degrades in open waters and the often chaotic movements of ocean currents). Aware of the uncertainties, the scientific team projected the likely path of the spill with a computer model of a liquid dye. They used dye rather than actual oil, which undergoes bacterial breakdown, because a reliable method to simulate that breakdown was not available. As it turned out, the oil in the Gulf broke down quickly due to exceptionally strong bacterial action and, to some extent, the use of chemical dispersants.
  • ...3 more annotations...
  • Second, the challenges of short-term behavior. The Gulf's Loop Current acts as a conveyor belt, moving from the Yucatan through the Florida Straits into the Atlantic. Usually, the current curves northward near the Louisiana and Mississippi coasts—a configuration that would have put it on track to pick up the oil and transport it into open ocean. However, the current’s short-term movements over a few weeks or even months are chaotic and impossible to predict. Sometimes small eddies, or mini-currents, peel off, shifting the position and strength of the main current. To determine the threat to the Atlantic, the research team studied averages of the Loop Current’s past behavior in order to simulate its likely course after the spill and ran several dozen computer simulations under various scenarios. Fortunately for the East Coast, the Loop Current did not behave in its usual fashion but instead remained farther south than usual, which kept it far from the Louisiana and Mississippi coast during the crucial few months before the oil degraded and/or was dispersed with chemical treatments. (A toy sketch of this ensemble-style projection approach follows these annotations.)
  • The Loop Current typically goes into a southern configuration about every 6 to 19 months, although it rarely remains there for very long. NCAR scientist Synte Peacock, who worked on the projection, explains that part of the reason the current is unpredictable is “no two cycles of the Loop Current are ever exactly the same." She adds that the cycles are influenced by such variables as how large the eddy is, where the current detaches and moves south, and how long it takes for the current to reform. Computer models can simulate the currents realistically, she adds. But they cannot predict when the currents will change over to a new cycle. The scientists were careful to explain that their simulations were a suite of possible trajectories demonstrating what was likely to happen, but not a definitive forecast of what would happen. They reiterated that point in a peer-reviewed study on the simulations that appeared last August in Environmental Research Letters. 
  • So who was at fault?  According to Hosansky, it was those dummies in the media: These caveats, however, got lost in much of the resulting media coverage. Another perspective is that having some of these caveats in the press release might have been a good idea.
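The ensemble-style projection described in these notes can be illustrated with a minimal sketch in Python. This is not NCAR's model (which advected a dye tracer in a full ocean simulation); every number below (scenario probabilities, current speeds, the distance to the coast) is an invented assumption. The point is only to show why the output is a projection, the fraction of assumed scenarios in which the tracer reaches the coast, rather than a forecast of what will actually happen.

```python
# Toy illustration only -- not NCAR's model. A passive tracer drifts toward an
# assumed coastline under many sampled "current behaviour" scenarios; the result
# is the fraction of scenarios in which it arrives within the time horizon.
import numpy as np

rng = np.random.default_rng(42)

N_SCENARIOS = 1000        # the real study ran "several dozen"; more is cheap here
DISTANCE_KM = 1500.0      # assumed distance from spill site to the target coast
HORIZON_DAYS = 90         # assumed window before the oil degrades or disperses

reached = 0
for _ in range(N_SCENARIOS):
    # Sample a scenario from an assumed distribution of past current behaviour,
    # including occasional weak "southern configuration" episodes.
    southern_config = rng.random() < 0.3
    mean_speed = rng.normal(8.0, 3.0) if southern_config else rng.normal(25.0, 6.0)
    # Add day-to-day chaos around the scenario mean and accumulate distance covered.
    daily_speed = np.clip(rng.normal(mean_speed, 5.0, HORIZON_DAYS), 0.0, None)
    if daily_speed.sum() >= DISTANCE_KM:
        reached += 1

print(f"Projection: tracer reaches the coast in {reached / N_SCENARIOS:.0%} of scenarios")
```

Change the assumed probability of the southern configuration and the answer changes sharply, which is exactly the sensitivity the caveats in the press release were meant to convey.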
Weiye Loh

Roger Pielke Jr.'s Blog: Karen Clark on Catastrophe Models - 0 views

  • In the interview she recommends the use of benchmark metrics of model performance, highlights the importance of understanding irreducible uncertainties, and gives a nod toward the use of normalized disaster loss studies.  Deep in our archives you can find an example of a benchmarking study that might be of the sort that Clark is suggesting (here in PDF).
Weiye Loh

Physics Envy in Development (even worse than in Finance!) - 0 views

  • Andrew Lo and Mark Mueller at MIT have a paper called “WARNING: Physics Envy May Be Hazardous to Your Wealth,” also available as a video.
  • The inability to recognize radical UNCERTAINTY is what leads to excessive confidence in mathematical models of reality, and then on to bad policy and prediction.
  • The key concept of the paper is to define a continuum of uncertainty from the less radical to the more radical. You get into trouble when you think there is a higher level of certainty than there really is.
    1. Complete Certainty
    2. Risk without Uncertainty (randomness when you know the exact probability distribution)
    3. Fully Reducible Uncertainty (known set of outcomes, known model, and lots of data; fits the assumptions of classical statistical techniques, so you can get arbitrarily close to Type 2)
    4. Partially Reducible Uncertainty (“model uncertainty”: “we are in a casino that may or may not be honest, and the rules tend to change from time to time without notice.”)
    5. Irreducible Uncertainty: Complete Ignorance (consult a priest or astrologer)
    Physics Envy in Development leads you to think you are in Type 2 or Type 3 when you are really in Type 4. This feeds the futile search for the Grand Unifying Theory of Development. (A toy simulation of mistaking Type 4 for Type 3 follows below.)
  •  
    Physics Envy in Development (even worse than in Finance!)
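A toy simulation, not taken from the Lo and Mueller paper, of the mistake the continuum warns about: an analyst who assumes a Type 2/3 world (a fixed, estimable distribution) sets a 1%-worst-case threshold from calm data, and the process then shifts regime in Type 4 fashion. All distributions and parameters are assumptions chosen purely for illustration.

```python
# Illustrative only: treating Type 4 "model uncertainty" as if it were Type 2/3.
import numpy as np

rng = np.random.default_rng(0)

# Data the analyst sees: a calm period that really is well described by N(0, 1).
calm = rng.normal(0.0, 1.0, 5000)
var_99 = np.quantile(calm, 0.01)          # estimated 1%-worst daily outcome

# The casino changes its rules without notice: volatility triples.
stressed = rng.normal(0.0, 3.0, 5000)
breach_rate = np.mean(stressed < var_99)  # how often the "1%" level is actually breached

print(f"Assumed breach rate: 1.0%   Realised breach rate under the new regime: {breach_rate:.1%}")
```

The statistics are fine; the error is the belief that a Type 3 toolkit applies when the environment is really Type 4, which is the paper's charge against physics envy.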
Weiye Loh

What If The Very Theory That Underlies Why We Need Patents Is Wrong? | Techdirt - 0 views

  • Scott Walker points us to a fascinating paper by Carliss Y. Baldwin and Eric von Hippel, suggesting that some of the most basic theories on which the patent system is based are wrong, and because of that, the patent system might hinder innovation.
  • Numerous other research papers and case studies suggest that the patent system quite frequently hinders innovation, but this one approaches it from a different angle than ones we've seen before, and is actually quite convincing. It looks at the putative theory that innovation comes from a direct profit motive of a single corporation looking to sell the good in the market, and for that to work, the company needs to take the initial invention and get temporary monopoly protection to keep out competitors in order to recoup the cost of research and development.
  • the paper goes through a whole bunch of studies suggesting that quite frequently innovation happens through a very different process: either individuals or companies directly trying to solve a problem they themselves have (i.e., the initial motive is not to profit directly from sales, but to help them in something they were doing) or through a much more collaborative process, whereby multiple parties all contribute to the process of innovation, somewhat openly, recognizing that as each contributes some, everyone benefits. As the report notes: This result hinges on the fact that the innovative design itself is a non-rival good: each participant in a collaborative effort gets the value of the whole design, but incurs only a fraction of the design cost. (A worked example of this cost-sharing logic is given after these annotations.)
  • ...5 more annotations...
  • patents are designed to make that sort of thing more difficult, because the system assumes that the initial act of invention is the key point, rather than all the incremental innovations built on top of it that all parties can benefit from.
  • the report points to numerous studies that show that, when given the chance, many companies freely share their ideas with others, recognizing the direct benefit they get.
  • Even more importantly, the paper finds that due to technological advances and the ability to more rapidly and easily communicate and collaborate widely, these forms of innovation (innovation for direct use as well as collaborative innovation) are becoming more and more viable across a variety of industries, which in the past may have relied more on the old way of innovating (a single company innovating for the profit of selling that product).
  • because of the ease of communication and collaboration these days, there's tremendous incentive for those companies that innovate for their own use to collaborate with others, since improvements made by others also help improve their own uses. Thus, the overall incentives are to move much more to a collaborative form of innovation in the market. That has huge implications for a patent system designed to help the "old model" of innovation (producer inventing for the market) and not the increasingly regular one (collaborative innovation for usage).
  • no one is saying that producer-based innovation (company inventing to sell on the market) doesn't occur or won't continue to occur. But it is an open policy question as to whether or not our innovation policies should favor that model over other models -- when evidence suggests that a significant amount of innovation occurs in these other ways -- and that amount is growing rapidly.
  •  
    What If The Very Theory That Underlies Why We Need Patents Is Wrong? from the collaborative-innovation-at-work dept
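A worked example, with hypothetical numbers, of the non-rivalry point quoted in the annotations above: each collaborator enjoys the full value of the finished design but bears only a fraction of its cost.

```latex
% Hypothetical numbers for illustration only.
% V = value of the finished design to each user, C = total design cost,
% n = number of collaborators splitting the cost evenly.
\[
V = 40, \qquad C = 100, \qquad n = 5
\]
\[
\text{solo inventor: } V - C = 40 - 100 = -60 < 0,
\qquad
\text{collaborator: } V - \frac{C}{n} = 40 - \frac{100}{5} = 20 > 0 .
\]
```

On these assumed numbers no single firm would fund the design alone, yet collaboration pays for every participant, which is the mechanism the paper argues a producer-centric patent theory overlooks.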
Weiye Loh

Freakonomics » The Revolution Will Not Be Televised. But It Will Be Tweeted - 0 views

  • information alone does not destabilize an oppressive regime. In fact, more information (and the control of that information) is a major source of political strength for any ruling party. The state-controlled media of North Korea is a current example of the power of propaganda, much as it was in the Soviet Union and Nazi Germany, where the state heavily subsidized the diffusion of radios during the 1930s to help spread Nazi propaganda.
  • changes in technology do not by themselves weaken the state. While Twitter played a role in the Iranian protests in 2009, the medium was used effectively by the Iranian regime to spread rumors and disinformation. But, if information becomes not just more widespread but more reliable, the regime’s chances of survival are significantly diminished. In this sense, though social media like Twitter and Facebook appear to be a scattered mess, they are more reliable than state-controlled messages.
  • The model predicts that a given percentage increase in information reliability has exactly twice as large an effect on the regime’s chances as the same percentage increase in information quantity, so, overall, an information revolution that leads to roughly equal-sized percentage increases in both these characteristics will reduce a regime’s chances of surviving. (One simple functional form consistent with this 2:1 ratio is sketched below.)
  •  
    If the quantity of information available to citizens is sufficiently high, then the regime has a better chance of surviving. However, an increase in the reliability of information can reduce the regime's chances. These two effects are always in tension: a regime benefits from an increase in information quantity if and only if an increase in information reliability reduces its chances. The model allows for two kinds of information revolutions. In the first, associated with radio and mass newspapers under the totalitarian regimes of the early twentieth century, an increase in information quantity coincides with a shift towards media institutions more accommodative of the regime and, in this sense, a decrease in information reliability. In this case, both effects help the regime. In the second kind, associated with diffuse technologies like modern social media, an increase in information quantity coincides with a shift towards sources of information less accommodative of the regime and an increase in information reliability. This makes the quantity and reliability effects work against each other.
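One way to make the quoted 2:1 result concrete is with a toy survival function. The functional form below is not the authors' model; it is simply one example whose elasticities reproduce the stated ratio between the reliability (r) and quantity (q) effects.

```latex
% Illustrative only: a survival function consistent with the quoted 2:1 ratio.
\[
S(q, r) \propto q^{\alpha}\, r^{-2\alpha}, \qquad \alpha > 0
\quad\Longrightarrow\quad
\frac{\partial \ln S}{\partial \ln q} = \alpha,
\qquad
\frac{\partial \ln S}{\partial \ln r} = -2\alpha .
\]
```

Equal percentage increases in q and r then change \ln S by \alpha - 2\alpha = -\alpha < 0, so an information revolution that raises both by the same proportion lowers the regime's chances of surviving, as the passage above states.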
Weiye Loh

Roger Pielke Jr.'s Blog: Blind Spots in Australian Flood Policies - 0 views

  • better management of flood risks in Australia will depend upon better data on flood risk.  However, collecting such data has proven problematic.
  • As many Queenslanders affected by January’s floods are realising, riverine flood damage is commonly excluded from household insurance policies. And this is unlikely to change until councils – especially in Queensland – stop dragging their feet and actively assist in developing comprehensive data insurance companies can use.
  • Why? Because there is often little available information that would allow an insurer to adequately price this flood risk. Without this, there is little economic incentive for insurers to accept this risk. It would be irresponsible for insurers to cover riverine flood without quantifying and pricing the risk accordingly.
  • ...8 more annotations...
  • The first step in establishing risk-adjusted premiums is to know the likelihood of the depth of flooding at each address. This information has to be address-specific because the severity of flooding can vary widely over small distances, for example, from one side of a road to the other.
  • A litany of reasons is given for withholding data. At times it seems that refusal stems from a view that insurance is innately evil. This is ironic in view of the gratuitous advice sometimes offered by politicians and commentators in the aftermath of extreme events, exhorting insurers to pay claims even when no legal liability exists and riverine flood is explicitly excluded from policies.
  • Risk Frontiers is involved in jointly developing the National Flood Information Database (NFID) for the Insurance Council of Australia with Willis Re, a reinsurance broking intermediary. NFID is a five year project aiming to integrate flood information from all city councils in a consistent insurance-relevant form. The aim of NFID is to help insurers understand and quantify their risk. Unfortunately, obtaining the base data for NFID from some local councils is difficult and sometimes impossible despite the support of all state governments for the development of NFID. Councils have an obligation to assess their flood risk and to establish rules for safe land development. However, many are antipathetic to the idea of insurance. Some states and councils have been very supportive – in New South Wales and Victoria, particularly. Some states have a central repository – a library of all flood studies and digital terrain models (digital elevation data). Council reluctance to release data is most prevalent in Queensland, where, unfortunately, no central repository exists.
  • Second, models of flood risk are sometimes misused:
  • many councils only undertake flood modelling in order to create a single design flood level, usually the so-called one-in-100 year flood. (For reasons given later, a better term is the flood with a 1% annual likelihood of being exceeded.)
  • Inundation maps showing the extent of the flood with a 1% annual likelihood of exceedance are increasingly common on council websites, even in Queensland. Unfortunately these maps say little about the depth of water at an address or, importantly, how depth varies for less probable floods. Insurance claims usually begin when the ground is flooded and increase rapidly as water rises above the floor level. At Windsor in NSW, for example, the difference in the water depth between the flood with a 1% annual chance of exceedance and the maximum possible flood is nine metres. In other catchments this difference may be as small as ten centimetres. The risk of damage is quite different in both cases and an insurer needs this information if they are to provide coverage in these areas.
  • The ‘one-in-100 year flood’ term is misleading. To many it is something that happens regularly once every 100 years, with the reliability of a bus timetable. It is still possible, though unlikely, that a flood of similar or even greater magnitude could happen twice in one year or three times in successive years.
  • The calculations underpinning this are not straightforward, but the probability that an address exposed to a 1-in-100 year flood will experience such an event or greater over the lifetime of the house – 50 years say – is around 40%. Over the lifetime of a typical home mortgage – 25 years – the probability of occurrence is 22%. These are not good odds. (The standard exceedance calculation behind these figures is shown below.)
  •  
    John McAneney of Risk Frontiers at Macquarie University in Sydney identifies some opportunities for better flood policies in Australia.
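The 40% and 22% figures quoted above follow from the standard exceedance calculation, under the usual assumptions that the 1% annual probability is constant and that years are independent.

```latex
% Probability of at least one exceedance of the 1%-annual-chance flood in n years.
\[
P_n = 1 - (1 - p)^{n}, \qquad p = 0.01
\]
\[
n = 50:\; P_{50} = 1 - 0.99^{50} \approx 0.39,
\qquad
n = 25:\; P_{25} = 1 - 0.99^{25} \approx 0.22 .
\]
```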
Weiye Loh

The importance of culture change in open government | Government In The Lab - 0 views

  • Open government cannot succeed through technology only.  Open data, ideation platforms, cloud solutions, and social media are great tools, but when they are used to deliver government services using existing models they can only deliver partial value, value which cannot be measured and value that is unclear to anyone but the technology practitioners that are delivering the services.
  • It is this thinking that has led a small group of us to launch a new Group on Govloop called Culture Change and Open Government.  Bill Brantley wrote a great overview of the group which notes that “The purpose of this group is to create an international community of practice devoted to discussing how to use cultural change to bring about open government and to use this site to plan and stage unconferences devoted to cultural change”.
  • “Open government is a citizen-centric philosophy and strategy that believes the best results are usually driven by partnerships between citizens and government, at all levels. It is focused entirely on achieving goals through increased efficiency, better management, information transparency, and citizen engagement and most often leverages newer technologies to achieve the desired outcomes. This is bringing business approaches, business technologies, to government”.
  •  
    open government has primarily been the domain of the technologist.  Other parts of the organization have not been considered, have not been educated, have not been organized around a new way of thinking, a new way of delivering value.  The organizational model, the culture itself, has not been addressed, the value of open government is not understood, it is not measurable, and it is not an approach that the majority of those in and around government have bought into.
Weiye Loh

Response to Guardian's Article on Singapore Elections | the kent ridge common - 0 views

  • Further, grumblings on Facebook accounts are hardly ‘anonymous’. Lastly, how anonymous can bloggers be, when every now and then a racist blogger gets arrested by the state? Think about it. These sorts of cases prove that the state does screen, survey and monitor the online community, and as all of us know there are many vehement anti-PAP comments and articles, much of which are outright slander and defamation.
  • Yet at the end of the day, it is the racist blogger, not the anti-government or anti-PAP blogger that gets arrested. The Singaporean model is a much more complex and sophisticated phenomenon than this Guardian writer gives it credit.
  • Why did this Guardian writer, anyway, pander to a favourite Western stereotype of that “far-off Asian undemocratic, repressive regime”? Is she really in Singapore as the Guardian claims? (“Kate Hodal in Singapore” is written at the top) Can the Guardian be any more predictable and trite?
  • ...1 more annotation...
  • Can any Singaporean honestly say that she/he can conceive of a fellow Singaporean setting himself or herself on fire along Orchard Road or Shenton Way, as a result of desperate economic pressures or financial constraints? Can we even fathom the social and economic pressures that mobilized a whole people to protest and overthrow a corrupt, US-backed regime? (that is, not during election time) Singapore has real problems, the People’s Action Party has its real problems, and there is indeed much room for improvement. Yet such irresponsible reporting by one of the esteemed newspapers from the UK is utterly disappointing, not constructive in the least sense, and utterly misrepresents our political situation (and may potentially provoke more irrationality in our society, leading people to ‘believe’ their affinity with their Arab peers, which leads to more radicalism).
Weiye Loh

Does "Inclusion" Matter for Open Government? (The Answer Is, Very Much Indeed... - 0 views

  • But in the context of the Open Government Partnership and the 70 or so countries that have already committed themselves to this or are in the process, I’m not sure that the world can afford to wait to see whether this correlation is direct, indirect, or spurious, especially if we can recognize that in the world of OGP, the currency of accumulation and concentration is not raw economic wealth but rather raw political power.
  • In the same way as there appears to be an association between the rise of the Internet and increasing concentrations of wealth, one might anticipate that the rise of Internet-enabled structures of government might be associated with the increasing concentration of political power in fewer and fewer hands, and particularly the hands of those most adept at manipulating the artifacts and symbols of the new Internet age.
  • I am struck by the fact that while the OGP over and over talks about the importance and value and need for Open Government, there is no similar or even partial call for Inclusive Government.  I’ve argued elsewhere how “Open”, in the absence of attention being paid to ensuring that the pre-conditions for the broadest base of participation are in place, will almost inevitably lead to the empowerment of the powerful. What I fear with the OGP is that, by not paying even a modicum of attention to the issue of inclusion or inclusive development and participation, all of the idealism and energy that is displayed today in Brasilia is being directed towards the creation of the Governance equivalents of the Internet billionaires, whatever that might look like.
  • ...1 more annotation...
  • crowd sourced public policy
  •  
    alongside the rise of the Internet and the empowerment of the Internet generation have emerged the greatest inequalities of wealth and privilege that any of the increasingly Internet-enabled economies/societies have experienced at least since the Great Depression and perhaps since the beginnings of systematic economic record keeping.  The association between the rise of inequality and the rise of the Internet has not yet been explained, and it may simply be a coincidence, but somehow I'm doubtful, and we await a newer generation of rather more critical and less dewy-eyed economists to give us the models and explanations for this co-evolution.
Weiye Loh

The New Republic: Lessons From China And Singapore : NPR - 0 views

  • What do educators in Singapore and China do? By their own internal accounts, they do a great deal of rote learning and "teaching to the test." Even if our sole goal was to produce students who would contribute maximally to national economic growth — the primary, avowed goal of education in Singapore and China — we should reject their strategies, just as they themselves have rejected them.
  • both nations have conducted major educational reforms, concluding that a successful economy requires nourishing analytical abilities, active problem-solving, and the imagination required for innovation.
  • Observers of current practices in both Singapore and China conclude that the reforms have not really been implemented. Teacher pay is still linked to test scores, and thus the incentive structure to effectuate real change is lacking. In general, it's a lot easier to move toward rote learning than to move away from it
  • ...3 more annotations...
  • Moreover, the reforms are cabined by these authoritarian nations' fear of true critical freedom. In Singapore, nobody even attempts to use the new techniques when teaching about politics and contemporary problems. "Citizenship education" typically takes the form of analyzing a problem, proposing several possible solutions, and then demonstrating how the one chosen by government is the right one for Singapore.
  • One professor of communications (who has since left Singapore) reported on a recent attempt to lead a discussion of the libel suits in her class: "I can feel the fear in the room. …You can cut it with a knife."
  • Singapore and China are terrible models of education for any nation that aspires to remain a pluralistic democracy. They have not succeeded on their own business-oriented terms, and they have energetically suppressed imagination and analysis when it comes to the future of the nation and the tough choices that lie before it. If we want to turn to Asia for models, there are better ones to be found: Korea's humanistic liberal arts tradition, and the vision of Tagore and like-minded Indian educators.
  •  
    The New Republic: Lessons From China And Singapore by MARTHA C. NUSSBAUM