
New Media Ethics 2009 course / Group items tagged: technologies


Weiye Loh

Land Destroyer: Alternative Economics - 0 views

  • Peer to peer file sharing (P2P) has made media distribution free and has become the bane of media monopolies. P2P file sharing means digital files can be copied and distributed at no cost. CDs, DVDs, and other older forms of holding media are no longer necessary, nor is the cost involved in making them or distributing them along a traditional logistical supply chain. Disc burners, however, allow users to create their own physical copies at a fraction of the cost of buying the media from the stores. Supply and demand is turned on its head: the more popular a certain file becomes via demand, the more of it is available for sharing, and the easier it is to obtain. Supply and demand increase in tandem towards a lower "price" of obtaining the said file. Consumers demand more as price decreases. Producers naturally want to produce more of something as price increases. Somewhere in between, consumers and producers meet at the market price or "market equilibrium." P2P technology eliminates material scarcity, thus the more a file is in demand, the more people end up downloading it, and the easier it is for others to find it and download it. Consider the implications this would have if technology made physical objects as easy to "share" as information is now. (A toy version of this equilibrium argument is sketched at the end of this entry.)
  • In the end, it is not government regulations, legal contrivances, or licenses that govern information, but rather the free market mechanism commonly referred to as Adam Smith's self regulating "Invisible Hand of the Market." In other words, people selfishly seeking accurate information for their own benefit encourage producers to provide the best possible information to meet their demand. While this is not possible in a monopoly, particularly the corporate media monopoly of the "left/right paradigm" of false choice, it is inevitable in the field of real competition that now exists online due to information technology.
  • Compounding the establishment's troubles are cheaper cameras and cheaper, more capable software for 3D graphics, editing, mixing, and other post production tasks, allowing for the creation of an alternative publishing, audio and video industry. "Underground" counter-corporate music and film has been around for a long time but through the combination of technology and the zealous corporate lawyers disenfranchising a whole new generation that now seeks an alternative, it is truly coming of age.
  • With a growing community of people determined to become collaborative producers rather than fit into the producer/consumer paradigm, and 3D files for physical objects already being shared like movies and music, the implications are profound. Products, and the manufacturing technology used to make them, will continue to drop in price and become easier for individuals rather than large corporations to make, just as media is now shifting into the hands of the common people. And like the shift of information, industry will move from the elite and their agenda of preserving their power to the empowerment of the people.
  • In a future alternative economy where everyone is a collaborative designer, producer, and manufacturer instead of passive consumers and when problems like "global climate change," "overpopulation," and "fuel crises" cross our path, we will counter them with technical solutions, not political indulgences like carbon taxes, and not draconian decrees like "one-child policies."
  • We will become the literal architects of our own future in this "personal manufacturing" revolution. While these technologies may still appear primitive, or somewhat "useless" or "impractical," we must remember where our personal computers stood on the eve of the information age and how quickly they changed our lives. And while many of us may be unaware of this unfolding revolution, you can bet the globalists, power brokers, and all those that stand to lose from it not only see it but are already actively fighting against it. Understandably, it takes some technical know-how to jump into the personal manufacturing revolution. In part 2 of "Alternative Economics" we will explore real world "low-tech" solutions to becoming self-sufficient and local, and rediscover the empowerment granted by doing so.
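
The first annotation above frames P2P distribution in terms of supply and demand curves meeting at a market equilibrium. The sketch below is a minimal, made-up Python illustration of that argument; the linear demand and supply curves and every parameter value are hypothetical, chosen only to show how the "price" of a file collapses once copies can be reproduced at no cost.

# Toy illustration of the "market equilibrium" idea in the annotation above.
# All curves and numbers are invented for illustration.

def demand(price, a=100.0, b=2.0):
    """Quantity demanded falls as price rises: Qd = a - b*price."""
    return max(a - b * price, 0.0)

def supply(price, c=10.0, d=1.0):
    """Quantity supplied rises as price rises: Qs = c + d*price."""
    return c + d * price

# Conventional goods: equilibrium where the curves cross.
# a - b*p = c + d*p  =>  p* = (a - c) / (b + d)
a, b, c, d = 100.0, 2.0, 10.0, 1.0
p_star = (a - c) / (b + d)
print(f"Physical media: price ≈ {p_star:.0f}, demand ≈ {demand(p_star):.0f}, supply ≈ {supply(p_star):.0f}")

# P2P sharing: every downloader becomes another source, so supply scales with
# demand and the marginal cost of one more copy is ~0. The effective "price"
# collapses toward zero while quantity simply tracks demand at that price.
print(f"P2P file: price ≈ 0, copies obtained ≈ {demand(0.0):.0f}")
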
Weiye Loh

Skepticblog » The Immortalist - 0 views

  • There is something almost religious about Kurzweil’s scientism, an observation he himself makes in the film, noting the similarities between his goals and that of the world’s religions: “the idea of a profound transformation in the future, eternal life, bringing back the dead—but the fact that we’re applying technology to achieve the goals that have been talked about in all human philosophies is not accidental because it does reflect the goal of humanity.” Although the film never discloses Kurzweil’s religious beliefs (he was raised by Jewish parents as a Unitarian Universalist), in a (presumably) unintentionally humorous moment that ends the film Kurzweil reflects on the God question and answers it himself: “Does God exist? I would say, ‘Not yet.’”
  • Transcendent Man is Barry Ptolemy’s beautifully crafted and artfully edited documentary film about Kurzweil and his quest to save humanity.
  • Transcendent Man pulls viewers in through Kurzweil’s vision of a future in which we merge with our machines and vastly extend our longevity and intelligence to the point where even death will be defeated. This point is what Kurzweil calls the “singularity” (inspired by the physics term denoting the infinitely dense point at the center of a black hole), and he arrives at the 2029 date by extrapolating curves based on what he calls the “law of accelerating returns.” This is “Moore’s Law” (the doubling of computing power every year) on steroids, applied to every conceivable area of science, technology and economics. (A toy doubling extrapolation is sketched at the end of this entry.)
  • Ptolemy’s portrayal of Kurzweil is unmistakably positive, but to his credit he includes several critics from both religion and science. From the former, a radio host named Chuck Missler, a born-again Christian who heads the Koinonia Institute (“dedicated to training and equipping the serious Christian to sojourn in today’s world”), proclaims: “We have a scenario laid out that the world is heading for an Armageddon and you and I are going to be the generation that’s alive that is going to see all this unfold.” He seems to be saying that Kurzweil is right about the second coming, but wrong about what it is that is coming. (Of course, Missler’s prognostication is the N+1 failed prophecy that began with Jesus himself, who told his followers (Mark 9:1): “Verily I say unto you, That there be some of them that stand here, which shall not taste of death, till they have seen the kingdom of God come with power.”) Another religiously-based admonition comes from the Stanford University neuroscientist William Hurlbut, who self-identifies as a “practicing Christian” who believes in immortality, but not in the way Kurzweil envisions it. “Death is conquered spiritually,” he pronounced.
  • On the science side of the ledger, Neil Gershenfeld, director of the Center for Bits and Atoms at the Massachusetts Institute of Technology, sagely notes: “What Ray does consistently is to take a whole bunch of steps that everybody agrees on and take principles for extrapolating that everybody agrees on and show they lead to things that nobody agrees on.” Likewise, the estimable futurist Kevin Kelly, whose 2010 book What Technology Wants paints a much more realistic portrait of what our futures may (or may not) hold
  • Kelly agrees that Kurzweil’s exponential growth curves are accurate but that the conclusions and especially the inspiration drawn from them are not. “He seems to have no doubts about it and in this sense I think he is a prophetic type figure who is completely sure and nothing can waver his absolute certainty about this. So I would say he is a modern day prophet…that’s wrong.”
  • Transcendent Man is clearly meant to be an uplifting film celebrating all the ways science and technology have enriched and are going to enrich our lives.
  • An especially lachrymose moment is when Kurzweil is rifling through his father’s journals and documents in a storage room dedicated to preserving his memory until the day that all this “data” (including Ray’s own fading memories) can be reconfigured into an A.I. simulacrum so that father and son can be reunited.
  • Although Kurzweil says he is optimistic and cheery about life, he can’t seem to stop talking about death: “It’s such a profoundly sad, lonely feeling that I really can’t bear it,” he admits. “So I go back to thinking about how I’m not going to die.” One wonders how much of life he is missing by overthinking death, or how burdensome it must surely be to imbibe over 200 supplement tablets a day and have your blood tested and cleansed every couple of months, all in an effort to reprogram the body’s biochemistry.
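
The "law of accelerating returns" annotation above rests on exponential doubling. The sketch below is a toy Python illustration of that kind of curve extrapolation; the baseline value, start year, and annual doubling period are assumptions made for illustration, not Kurzweil's actual data.

# Toy extrapolation in the spirit of the "law of accelerating returns":
# if a capability doubles every year, project it forward from a baseline.
# Baseline, start year, and doubling period are illustrative assumptions.

def project(baseline, start_year, end_year, doubling_period_years=1.0):
    """Projected capability at end_year under exponential doubling."""
    return baseline * 2 ** ((end_year - start_year) / doubling_period_years)

baseline = 1.0  # arbitrary units of "computing power" in the start year
for year in (2010, 2020, 2029):
    print(year, project(baseline, 2009, year))
# Annual doubling gives roughly a millionfold (2**20) increase by 2029, which is
# the kind of curve extrapolation the critics quoted in the film object to.
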
Weiye Loh

Technology and Inequality - Kenneth Rogoff - Project Syndicate - 0 views

  • it is easy to forget that market forces, if allowed to play out, might eventually exert a stabilizing role. Simply put, the greater the premium for highly skilled workers, the greater the incentive to find ways to economize on employing their talents.
  • one of the main ways to uncover cheating is by using a computer program to detect whether a player’s moves consistently resemble the favored choices of various top computer programs. (A minimal sketch of this idea appears at the end of this entry.)
  • many other examples of activities that were once thought exclusively the domain of intuitive humans, but that computers have come to dominate. Many teachers and schools now use computer programs to scan essays for plagiarism
  • computer-grading of essays is a surging science, with some studies showing that computer evaluations are fairer, more consistent, and more informative than those of an average teacher, if not necessarily of an outstanding one.
  • the relative prices of grains, metals, and many other basic goods tended to revert to a central mean tendency over sufficiently long periods. We conjectured that even though random discoveries, weather events, and technologies might dramatically shift relative values for certain periods, the resulting price differentials would create incentives for innovators to concentrate more attention on goods whose prices had risen dramatically.
  • people are not goods, but the same principles apply. As skilled labor becomes increasingly expensive relative to unskilled labor, firms and businesses have a greater incentive to find ways to “cheat” by using substitutes for high-price inputs. The shift might take many decades, but it also might come much faster as artificial intelligence fuels the next wave of innovation.
  • Many commentators seem to believe that the growing gap between rich and poor is an inevitable byproduct of increasing globalization and technology. In their view, governments will need to intervene radically in markets to restore social balance. I disagree. Yes, we need genuinely progressive tax systems, respect for workers’ rights, and generous aid policies on the part of rich countries. But the past is not necessarily prologue: given the remarkable flexibility of market forces, it would be foolish, if not dangerous, to infer rising inequality in relative incomes in the coming decades by extrapolating from recent trends.
  •  
    Until now, the relentless march of technology and globalization has played out hugely in favor of high-skilled labor, helping to fuel record-high levels of income and wealth inequality around the world. Will the endgame be renewed class warfare, with populist governments coming to power, stretching the limits of income redistribution, and asserting greater state control over economic life?
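
One annotation above describes detecting chess cheating by checking how often a player's moves match the favored choices of top engines. The sketch below is a minimal Python illustration of that idea; the moves, engine choices, and suspicion threshold are all invented, and real detection methods model expected match rates statistically rather than using a fixed cutoff.

# Minimal sketch of engine-match cheating detection: compare a player's moves
# with the top choices of strong engines and flag unusually high agreement.
# Moves, engine picks, and the threshold below are invented examples.

def engine_match_rate(player_moves, engine_top_choices):
    """Fraction of positions where the player's move was among the engine's top picks."""
    matches = sum(1 for move, top in zip(player_moves, engine_top_choices) if move in top)
    return matches / len(player_moves)

player_moves = ["e4", "Nf3", "Bb5", "O-O", "Re1", "d4"]
engine_top_choices = [{"e4", "d4"}, {"Nf3"}, {"Bb5", "Bc4"}, {"O-O"}, {"c3"}, {"d4"}]

rate = engine_match_rate(player_moves, engine_top_choices)
SUSPICION_THRESHOLD = 0.9  # arbitrary; real systems model expected match rates by rating
print(f"Engine match rate: {rate:.0%}", "-> flag for review" if rate > SUSPICION_THRESHOLD else "")
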
Jude John

Democracy 2.0 Awaits an Upgrade - 3 views

http://www.nytimes.com/2009/09/12/world/americas/12iht-currents.html 1. "President Obama declared during the campaign that "we are the ones we've been waiting for." That messianic phrase held the ...

democracy technology

started by Jude John on 14 Sep 09 no follow-up yet
Weiye Loh

Skepticblog » Investing in Basic Science - 0 views

  • A recent editorial in the New York Times by Nicholas Wade raises some interesting points about the nature of basic science research – primarily that it’s risky.
  • As I have pointed out about the medical literature, researcher John Ioannidis has explained why most published studies turn out in retrospect to be wrong. The same is true of most basic science research – and the underlying reason is the same. The world is complex, and most of our guesses about how it might work turn out to be either flat-out wrong, incomplete, or superficial. And so most of our probing and prodding of the natural world, looking for the path to the actual answer, turn out to miss the target. (A toy version of the underlying base-rate arithmetic appears at the end of this entry.)
  • research costs considerable resources of time, space, money, opportunity, and people-hours. There may also be some risk involved (such as to subjects in the clinical trial). Further, negative studies are actually valuable (more so than terrible pictures). They still teach us something about the world – they teach us what is not true. At the very least this narrows the field of possibilities. But the analogy holds in so far as the goal of scientific research is to improve our understanding of the world and to provide practical applications that make our lives better. Wade writes mostly about how we fund research, and this relates to our objectives. Most of the corporate research money is interested in the latter – practical (and profitable) applications. If this is your goal, then basic science research is a bad bet. Most investments will be losers, and for most companies this will not be offset by the big payoffs of the rare winners. So many companies will allow others to do the basic science (government, universities, start-up companies), then raid the winners by using their resources to buy them out, and then bring them the final steps to a marketable application. There is nothing wrong or unethical about this. It’s a good business model.
  • What, then, is the role of public (government) funding of research? Primarily, Wade argues (and I agree), to provide infrastructure for expensive research programs, such as building large colliders.
  • the more the government invests in basic science and infrastructure, the more winners will emerge that private industry can then capitalize on. This is a good way to build a competitive dynamic economy.
  • But there is a pitfall – prematurely picking winners and losers. Wade gives the example of California investing specifically in developing stem cell treatments. He argues that stem cells, while promising, do not hold a guarantee of eventual success, and perhaps there are other technologies that will work and are being neglected. The history of science and technology has clearly demonstrated that it is wickedly difficult to predict the future (and all those who try are destined to be mocked by future generations with the benefit of perfect hindsight). Prematurely committing to one technology therefore contains a high risk of wasting a great deal of limited resources, and missing other perhaps more fruitful opportunities.
  • The underlying concept is that science research is a long-term game. Many avenues of research will not pan out, and those that do will take time to inspire specific applications. The media, however, likes catchy headlines. That means when they are reporting on basic science research journalists ask themselves – why should people care? What is the application of this that the average person can relate to? This seems reasonable from a journalistic point of view, but with basic science reporting it leads to wild speculation about a distant possible future application. The public is then left with the impression that we are on the verge of curing the common cold or cancer, or developing invisibility cloaks or flying cars, or replacing organs and having household robot servants. When a few years go by and we don’t have our personal android butlers, the public then thinks that the basic science was a bust, when in fact there was never a reasonable expectation that it would lead to a specific application anytime soon. But it still may be on track for interesting applications in a decade or two.
  • this also means that the government, generally, should not be in the game of picking winners and losers – putting their thumb on the scale, as it were. Rather, they will get the most bang for the research buck if they simply invest in science infrastructure, and also fund scientists in broad areas.
  • The same is true of technology – don’t pick winners and losers. The much-hyped “hydrogen economy” comes to mind. Let industry and the free market sort out what will work. If you have to invest in infrastructure before a technology is mature, then at least hedge your bets and keep funding flexible. Fund “alternative fuel” as a general category, and reassess on a regular basis how funds should be allocated. But don’t get too specific.
  • Funding research but leaving the details to scientists may be optimal
  • The scientific community can do their part by getting better at communicating with the media and the public. Try to avoid the temptation to overhype your own research, just because it is the most interesting thing in the world to you personally and you feel hype will help your funding. Don’t make it easy for the media to sensationalize your research – you should be the ones trying to hold back the reins. Perhaps this is too much to hope for – market forces conspire too much to promote sensationalism.
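
The Ioannidis point above ("most published studies turn out in retrospect to be wrong") follows from simple base-rate arithmetic: when true hypotheses are rare, even well-run studies produce many false positives. The sketch below works through that arithmetic in Python with illustrative values for the prior, power, and significance level; none of the numbers come from the post.

# Toy version of the "most published studies are wrong" argument.
# The prior, power, and significance level are illustrative assumptions.

prior = 0.05   # assume only 5% of tested hypotheses are actually true
power = 0.80   # probability a study detects a true effect
alpha = 0.05   # probability a study "finds" an effect that isn't there

true_positives = prior * power          # true effects correctly detected
false_positives = (1 - prior) * alpha   # null effects wrongly "confirmed"

ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive findings that are actually true: {ppv:.0%}")
# With these numbers only about 46% of "positive" results reflect real effects;
# weaker priors or lower power push the share down further.
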
Weiye Loh

Geeks at the Beach: 10 Summer Reads About Technology and Your Life - Technology - The C... - 0 views

  • we're so excited about checking e-mail and Facebook that we're neglecting face-to-face relationships, but that it's not too late to make some "corrections" to our high-tech habits. It's time to turn off the BlackBerry for a few minutes and set some ground rules for blending cyberspace with personal space.
  • examples such as Wikipedia and a ride-sharing Web site as proof that "the harnessing of our cognitive surplus allows people to behave in increasingly generous, public, and social ways."
  • the transformative potential of the Internet, as more people use their free time in active, collaborative projects rather than watching television.
  • Mr. Vaidhyanathan, a professor of media studies and law at the University of Virginia and frequent contributor to The Chronicle Review, reminds readers that they aren't consumers of Google's offerings. Rather, their use of Google's services is the product it sells to advertisers. Both books look at the continuing evolution of the Google Books settlement as a key test of how far the company's reach could extend and a sign of how the perception of Google has changed from that of scrappy upstart with a clever motto, "Don't be evil," to global behemoth accused by some of being just that.
  • Is the Internet on its way to getting monopolized? That question underlies Tim Wu's The Master Switch. The eccentric Columbia Law School professor—he's known to dress up as a blue bear at the annual Burning Man festival—recounts how ruthless companies consolidated their power over earlier information industries like the telephone, radio, and film. So which tech giant seems likely to grab control of the net?
  • it feels like we're perpetually on the verge of a tipping point, when e-books will overtake print books as a source of revenue for publishers. John B. Thompson, a sociologist at the University of Cambridge, analyzes the inner workings of the contemporary trade-publishing industry. (He did the same for scholarly publishing in an earlier work, Books in the Digital Age.) Mr. Thompson examines the roles played by agents, editors, and authors as well as differences among small, medium, and large publishing operations, and he probes under the surface of the great digital shift. We're too hung up on the form of the book, he argues: "A revolution has taken place in publishing, but it is a revolution in the process rather than a revolution in the product."
  • technology is actually doing far more to bolster authoritarian regimes than to overturn them, writes Evgeny Morozov in this sharp reality check on the media-fueled notion that information is making everybody free. Mr. Morozov, a visiting scholar at Stanford University, points out that the Iranian government posted "most wanted" pictures of protesters on the Web, leading to several arrests. The Muslim Brotherhood blogs actively in Egypt. And China pays people to make pro-authority statements on the Internet, paying a few cents for each endorsement. The Twitter revolution, in this book, is "overblown and completely unsubstantiated rhetoric."
  • The Internet is rewiring our brains and short-circuiting our ability to think. And that has big consequences for teaching, he told The Chronicle last year: "The assumption that the more media, the more messaging, the more social networking you can bring in will lead to better educational outcomes is not only dubious but in many cases is probably just wrong."
Weiye Loh

Technology and Anarchy » IAI TV - 0 views

  •  
    With a 3D printer and laptop, does everyone have the tools they need to build a bio-weapon? Science fiction novelist, blogger and activist Cory Doctorow talks to Nigel Warburton about whether we can - or should - attempt to regulate subversive technology.
lo sokwan

Scientists decode human genome's instruction manual - 0 views

  •  
    I'm pretty disturbed that there is now a genetic formula to "make" healthy humans. It sounds pretty cool that future human beings could be 'perfectly healthy', but at the same time it is pretty weird to imagine a world without illnesses. Could this lead to a commodification of human beings? If it is only available to the wealthy or the elite groups, is it an ethical technology?
  •  
    This is interesting. I think that the technology itself is neutral. Yes, it does open up options that push our boundary of what we consider ethical. But eventually, it is how humans use the technology that makes it ethical or unethical. Personally, I think that if this works out, it will definitely be only available to the wealthy and elite as they are the ones that have more means to access the technology. Just something to think about: expensive medication is also more accessible and available to the wealthy and elite. Is it ethical, then, to manufacture expensive medication? haha just some thoughts:)
  •  
    I think the issue with genetic research is that it legitimizes the (scientific) claim of eugenics, and perhaps, when taken to the extreme, allows for some kind of Nazi-style ethnic cleansing. Arthur Kroker wrote a pretty interesting, albeit rather doomsday-prophetic, account on this topic. I do not agree fully with him but I like the way he writes (rather enigmatic and seductive) about how science and the human genome project have managed to immunize themselves from the overt fascism of second-wave eugenics of National Socialism. The book is available in the library: "The Will to Technology and the Culture of Nihilism". It'll be nice to know what you all think about it. Do you think that such science will one day turn against humans who are deemed to be lesser humans simply because they have 'bad' genes?
Weiye Loh

The importance of culture change in open government | Government In The Lab - 0 views

  • Open government cannot succeed through technology alone. Open data, ideation platforms, cloud solutions, and social media are great tools, but when they are used to deliver government services through existing models they can only deliver partial value: value which cannot be measured and which is unclear to anyone but the technology practitioners delivering the services.
  • It is this thinking that has led a small group of us to launch a new Group on Govloop called Culture Change and Open Government.  Bill Brantley wrote a great overview of the group which notes that “The purpose of this group is to create an international community of practice devoted to discussing how to use cultural change to bring about open government and to use this site to plan and stage unconferences devoted to cultural change“
  • “Open government is a citizen-centric philosophy and strategy that believes the best results are usually driven by partnerships between citizens and government, at all levels. It is focused entirely on achieving goals through increased efficiency, better management, information transparency, and citizen engagement and most often leverages newer technologies to achieve the desired outcomes. This is bringing business approaches, business technologies, to government“.
  •  
    open government has primarily been the domain of the technologist.  Other parts of the organization have not been considered, have not been educated, have not been organized around a new way of thinking, a new way of delivering value.  The organizational model, the culture itself, has not been addressed, the value of open government is not understood, it is not measurable, and it is not an approach that the majority of those in and around government have bought into.
Weiye Loh

The Death of Postmodernism And Beyond | Philosophy Now - 0 views

  • Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture. Some of the texts (‘The Library of Babel’) were written even before their parents were born. Replace this cache with other postmodern stalwarts – Beloved, Flaubert’s Parrot, Waterland, The Crying of Lot 49, Pale Fire, Slaughterhouse 5, Lanark, Neuromancer, anything by B.S. Johnson – and the same applies. It’s all about as contemporary as The Smiths, as hip as shoulder pads, as happening as Betamax video recorders. These are texts which are just coming to grips with the existence of rock music and television; they mostly do not dream even of the possibility of the technology and communications media – mobile phones, email, the internet, computers in every house powerful enough to put a man on the moon – which today’s undergraduates take for granted.
  • somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.
  • Postmodernism, like modernism and romanticism before it, fetishised [ie placed supreme importance on] the author, even when the author chose to indict or pretended to abolish him or herself. But the culture we have now fetishises the recipient of the text to the degree that they become a partial or whole author of it. Optimists may see this as the democratisation of culture; pessimists will point to the excruciating banality and vacuity of the cultural products thereby generated (at least so far).
  • Pseudo-modernism also encompasses contemporary news programmes, whose content increasingly consists of emails or text messages sent in commenting on the news items. The terminology of ‘interactivity’ is equally inappropriate here, since there is no exchange: instead, the viewer or listener enters – writes a segment of the programme – then departs, returning to a passive role. Pseudo-modernism also includes computer games, which similarly place the individual in a context where they invent the cultural content, within pre-delineated limits. The content of each individual act of playing the game varies according to the particular player.
  • The pseudo-modern cultural phenomenon par excellence is the internet. Its central act is that of the individual clicking on his/her mouse to move through pages in a way which cannot be duplicated, inventing a pathway through cultural products which has never existed before and never will again. This is a far more intense engagement with the cultural process than anything literature can offer, and gives the undeniable sense (or illusion) of the individual controlling, managing, running, making up his/her involvement with the cultural product. Internet pages are not ‘authored’ in the sense that anyone knows who wrote them, or cares. The majority either require the individual to make them work, like Streetmap or Route Planner, or permit him/her to add to them, like Wikipedia, or through feedback on, for instance, media websites. In all cases, it is intrinsic to the internet that you can easily make up pages yourself (eg blogs).
  • Where once special effects were supposed to make the impossible appear credible, CGI frequently [inadvertently] works to make the possible look artificial, as in much of Lord of the Rings or Gladiator. Battles involving thousands of individuals have really happened; pseudo-modern cinema makes them look as if they have only ever happened in cyberspace.
  • Similarly, television in the pseudo-modern age favours not only reality TV (yet another unapt term), but also shopping channels, and quizzes in which the viewer calls to guess the answer to riddles in the hope of winning money.
  • The purely ‘spectacular’ function of television, as with all the arts, has become a marginal one: what is central now is the busy, active, forging work of the individual who would once have been called its recipient. In all of this, the ‘viewer’ feels powerful and is indeed necessary; the ‘author’ as traditionally understood is either relegated to the status of the one who sets the parameters within which others operate, or becomes simply irrelevant, unknown, sidelined; and the ‘text’ is characterised both by its hyper-ephemerality and by its instability. It is made up by the ‘viewer’, if not in its content then in its sequence – you wouldn’t read Middlemarch by going from page 118 to 316 to 401 to 501, but you might well, and justifiably, read Ceefax that way.
  • A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity.
  • If scholars give the date they referenced an internet page, it is because the pages disappear or get radically re-cast so quickly. Text messages and emails are extremely difficult to keep in their original form; printing out emails does convert them into something more stable, like a letter, but only by destroying their essential, electronic state.
  • The cultural products of pseudo-modernism are also exceptionally banal
  • Much text messaging and emailing is vapid in comparison with what people of all educational levels used to put into letters.
  • A triteness, a shallowness dominates all.
  • In music, the pseudo-modern superseding of the artist-dominated album as monolithic text by the downloading and mix-and-matching of individual tracks on to an iPod, selected by the listener, was certainly prefigured by the music fan’s creation of compilation tapes a generation ago. But a shift has occurred, in that what was a marginal pastime of the fan has become the dominant and definitive way of consuming music, rendering the idea of the album as a coherent work of art, a body of integrated meaning, obsolete.
  • To a degree, pseudo-modernism is no more than a technologically motivated shift to the cultural centre of something which has always existed (similarly, metafiction has always existed, but was never so fetishised as it was by postmodernism). Television has always used audience participation, just as theatre and other performing arts did before it; but as an option, not as a necessity: pseudo-modern TV programmes have participation built into them.
  • Whereas postmodernism called ‘reality’ into question, pseudo-modernism defines the real implicitly as myself, now, ‘interacting’ with its texts. Thus, pseudo-modernism suggests that whatever it does or makes is what is reality, and a pseudo-modern text may flourish the apparently real in an uncomplicated form: the docu-soap with its hand-held cameras (which, by displaying individuals aware of being regarded, give the viewer the illusion of participation); The Office and The Blair Witch Project, interactive pornography and reality TV; the essayistic cinema of Michael Moore or Morgan Spurlock.
  • whereas postmodernism favoured the ironic, the knowing and the playful, with their allusions to knowledge, history and ambivalence, pseudo-modernism’s typical intellectual states are ignorance, fanaticism and anxiety
  • pseudo-modernism lashes fantastically sophisticated technology to the pursuit of medieval barbarism – as in the uploading of videos of beheadings onto the internet, or the use of mobile phones to film torture in prisons. Beyond this, the destiny of everyone else is to suffer the anxiety of getting hit in the cross-fire. But this fatalistic anxiety extends far beyond geopolitics, into every aspect of contemporary life; from a general fear of social breakdown and identity loss, to a deep unease about diet and health; from anguish about the destructiveness of climate change, to the effects of a new personal ineptitude and helplessness, which yield TV programmes about how to clean your house, bring up your children or remain solvent.
  • Pseudo-modernism belongs to a world pervaded by the encounter between a religiously fanatical segment of the United States, a largely secular but definitionally hyper-religious Israel, and a fanatical sub-section of Muslims scattered across the planet: pseudo-modernism was not born on 11 September 2001, but postmodernism was interred in its rubble.
  • pseudo-modernist communicates constantly with the other side of the planet, yet needs to be told to eat vegetables to be healthy, a fact self-evident in the Bronze Age. He or she can direct the course of national television programmes, but does not know how to make him or herself something to eat – a characteristic fusion of the childish and the advanced, the powerful and the helpless. For varying reasons, these are people incapable of the “disbelief of Grand Narratives” which Lyotard argued typified postmodernists
  •  
    Postmodern philosophy emphasises the elusiveness of meaning and knowledge. This is often expressed in postmodern art as a concern with representation and an ironic self-awareness. And the argument that postmodernism is over has already been made philosophically. There are people who have essentially asserted that for a while we believed in postmodern ideas, but not any more, and from now on we're going to believe in critical realism. The weakness in this analysis is that it centres on the academy, on the practices and suppositions of philosophers who may or may not be shifting ground or about to shift - and many academics will simply decide that, finally, they prefer to stay with Foucault [arch postmodernist] than go over to anything else. However, a far more compelling case can be made that postmodernism is dead by looking outside the academy at current cultural production.
Weiye Loh

Drone journalism takes off - ABC News (Australian Broadcasting Corporation) - 0 views

  • Instead of acquiring military-style multi-million dollar unmanned aerial vehicles the size of small airliners, the media is beginning to go micro, exploiting rapid advances in technology by deploying small toy-like UAVs to get the story.
  • Last November, drone journalism hit the big time after a Polish activist launched a small craft with four helicopter-like rotors called a quadrocopter. He flew the drone low over riot police lines to record a violent demonstration in Warsaw. The pictures were extraordinarily different from run-of-the-mill protest coverage. Posted online, the images went viral. More significantly, this birds-eye view clip found its way onto the bulletins and web pages of mainstream media.
  • Drone Journalism Lab, a research project to determine the viability of remote airborne media.
  •  
    Drones play an increasing and controversial role in modern warfare. From Afghanistan and Pakistan to Iran and Yemen, they have become a ubiquitous symbol of Washington's war on terrorism. Critics point to the mounting drone-induced death toll as evidence that machines, no matter how sophisticated, cannot discriminate between combatants and innocent bystanders. Now drones are starting to fly into a more peaceful, yet equally controversial role in the media. Rapid technological advances in low-cost aerial platforms herald the age of drone journalism. But it will not be all smooth flying: this new media tool can expect to be buffeted by the issues of safety, ethics and legality.
Jody Poh

BBC NEWS | Technology | Defamation lawsuit for US tweeter - 0 views

  •  
    This news story is about Horizon Realty suing a woman called Amanda Bonnen for defamation on Twitter. Amanda Bonnen had micro-blogged her feelings about her apartment on Twitter: she was unhappy with the mould she found in it. This stirred a response from Horizon Realty, as it sees the comment she made as false. Also, as Twitter is such a widespread network, the company feels it has to protect its reputation online. Thus, it has decided to sue Amanda Bonnen. Ms Bonnen has recently moved out of the apartment and has been unavailable to comment on the lawsuit. Her Twitter account has also been deleted.
    Ethical question: I think many consider posting complaints and comments on Twitter similar to complaining to or having a conversation with friends over coffee. If this is the case, is it ethical or 'right' to be allowed to sue people like Amanda Bonnen?
    Ethical problem: This case brings up the point of the freedom of speech in public and private spaces. What are the boundaries and definitions of public and private space with the rise of new technologies such as Twitter? In what space (public or private) is Twitter then operating, and how much freedom of speech is allowed?
Weiye Loh

Does patent/ copyright stifle or promote innovation? - 6 views

From a Critical Ethic perspective, who do patents and copyrights protect? What kind of ideologies underlie such a policy? I would argue that it is the capitalist ideologies, individualist ideolo...

MS Word patent copyright

Li-Ling Gan

Bridging the Digital Divide - 6 views

http://www.riverdeep.net/current/2002/01/011402t_divide.jhtml This article essentially explains the concept of a digital divide, provides some statistics of this issue in the United States and bri...

digital divide

started by Li-Ling Gan on 07 Oct 09 no follow-up yet
Weiye Loh

Paul Crowley's Blog - A survey of anti-cryonics writing - 0 views

  • cryonics offers almost eternal life. To its critics, cryonics is pseudoscience; the idea that we could freeze someone today in such a way that future technology might be able to re-animate them is nothing more than wishful thinking on the desire to avoid death. Many who battle nonsense dressed as science have spoken out against it: see for example Nano Nonsense and Cryonics, a 2001 article by celebrated skeptic Michael Shermer; or check the Skeptic’s Dictionary or Quackwatch entries on the subject, or for more detail read the essay Cryonics–A futile desire for everlasting life by “Invisible Flan”.
  • And of course the pro-cryonics people have written reams and reams of material such as Ben Best’s Scientific Justification of Cryonics Practice on why they think this is exactly as plausible as I might think, and going into tremendous technical detail setting out arguments for its plausibility and addressing particular difficulties. It’s almost enough to make you want to sign up on the spot. Except, of course, that plenty of totally unscientific ideas are backed by reams of scientific-sounding documents good enough to fool non-experts like me. Backed by the deep pockets of the oil industry, global warming denialism has produced thousands of convincing-sounding arguments against the scientific consensus on CO2 and AGW.
  • Nano Nonsense and Cryonics goes for the nitty-gritty right away in the opening paragraph: To see the flaw in this system, thaw out a can of frozen strawberries. During freezing, the water within each cell expands, crystallizes, and ruptures the cell membranes. When defrosted, all the intracellular goo oozes out, turning your strawberries into runny mush. This is your brain on cryonics. This sounds convincing, but doesn’t address what cryonicists actually claim. Ben Best, President and CEO of the Cryonics Institute, replies in the comments: Strawberries (and mammalian tissues) are not turned to mush by freezing because water expands and crystallizes inside the cells. Water crystallizes in the extracellular space because more nucleators are found extracellularly. As water crystallizes in the extracellular space, the extracellular salt concentration increases causing cells to lose water osmotically and shrink. Ultimately the cell membranes are broken by crushing from extracellular ice and/or high extracellular salt concentration. […] Cryonics organizations use vitrification perfusion before cooling to cryogenic temperatures. With good brain perfusion, vitrification can reduce ice formation to negligible amounts.
  • The Skeptic’s Dictionary entry is no advance. Again, it refers erroneously to a “mushy brain”. It points out that the technology to reanimate those in storage does not already exist, but provides no help for us non-experts in assessing whether it is a plausible future technology, like super-fast computers or fusion power, or whether it is as crazy as the sand-powered tank; it simply asserts baldly and to me counterintuitively that it is the latter. Again, perhaps cryonic reanimation is a sand-powered tank, but I can explain to you why a sand-powered tank is implausible if you don’t already know, and if cryonics is in the same league I’d appreciate hearing the explanation.
  • Another part of the article points out the well-known difficulties with whole-body freezing — because the focus is on achieving the best possible preservation of the brain, other parts suffer more. But the reason why the brain is the focus is that you can afford to be a lot bolder in repairing other parts of the body — unlike the brain, if my liver doesn’t survive the freezing, it can be replaced altogether.
  • Further, the article ignores one of the most promising possibilities for reanimation, that of scanning and whole-brain emulation, a route that requires some big advances in computer and scanning technology as well as our understanding of the lowest levels of the brain’s function, but which completely sidesteps any problems with repairing either damage from the freezing process or whatever it was that led to legal death.
  • Sixteen years later, it seems that hasn’t changed; in fact, as far as the issue of technical feasibility goes it is starting to look as if on all the Earth, or at least all the Internet, there is not one person who has ever taken the time to read and understand cryonics claims in any detail, still considers it pseudoscience, and has written a paper, article or even a blog post to rebut anything that cryonics advocates actually say. In fact, the best of the comments on my first blog post on the subject are already a higher standard than anything my searches have turned up.
  • I don’t have anything useful to add, I just wanted to say that I feel exactly as you do about cryonics and living forever. And I thought that this statement: I know that I don’t know enough to judge. shows extreme wisdom. If only people wishing to comment on global warming would apply the same test.
  • WRT global warming, the mistake people make is trying to go direct to the first-order evidence, which is much too complicated and too easy to misrepresent to hope to directly interpret unless you make it your life’s work, and even then only in a particular area. The correct thing to do is to collect second-order evidence, such as that every major scientific academy has backed the IPCC.
    • Weiye Loh
       
      First-order evidence vs second-order evidence...
  •  
    Cryonics
Weiye Loh

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect, and that separates them from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the inventions of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could make someone who was mentally challenged able to be normal, I would define this as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh
       
      Assumes that "normal conditions" exist. 
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  •  
    Human, know thy place!
Weiye Loh

To Die of Having Lived: an article by Richard Rapport | The American Scholar - 0 views

  • Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
  • Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technological creations that permit vital functions to be sustained so well artificially, medical ethics were less obstructed by abstract definitions of death.
  • generally agreed upon criteria for brain death have simplified some of these confusions, but they have not solved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty.
    • Weiye Loh
       
      Isn't it always the case? That dichotomous relationships aren't clearly and equally demarcated but some how we attempt to split them up... through polemical discourses and rhetorics...
  • Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
    • Weiye Loh
       
      In what situations do we hear the word "nearly" and in what situations do we hear the word "certain"? When we're dealing with a person's life, we hear "nearly", but when we're dealing with climate science we hear "certain"? 
  • Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution.
  • The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
  • Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
  • Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
    • Weiye Loh
       
      Perhaps, then, the value of life lies not so much in life itself as in the volume of transactions it generates.
  • While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit.
  • The problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
  • Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
  • The best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) when healthy.
  • Not many people do this and, more regrettably, the document is often not included in the patient’s chart or goes unnoticed.
  • Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
  • Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
  • Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?
  •  
    Spring 2010. To Die of Having Lived: a neurological surgeon reflects on what patients and their families should and should not do when the end draws near.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • ...27 more annotations...
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
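A minimal sketch of the codebook idea, with invented entries rather than Chappe's actual code: encoding a whole phrase as a pair of symbols is far cheaper than signalling it letter by letter, which is where the gain in transmission speed came from.

```python
# Hypothetical illustration of a Chappe-style codebook: common phrases map to a
# pair of (page, line) symbols, so a whole phrase costs two signals instead of
# one signal per letter. The entries are invented, not Chappe's actual code.
CODEBOOK = {
    "army advancing": (12, 7),
    "send reinforcements": (12, 8),
    "enemy retreating": (45, 3),
}

def encode(message: str) -> list:
    """Encode phrase by phrase, falling back to crude letter-by-letter
    spelling (one symbol per character) for anything not in the codebook."""
    signals = []
    for phrase in message.split(";"):
        phrase = phrase.strip().lower()
        if phrase in CODEBOOK:
            signals.extend(CODEBOOK[phrase])        # two symbols per phrase
        else:
            signals.extend(ord(c) for c in phrase)  # one symbol per character
    return signals

msg = "enemy retreating; send reinforcements"
print(encode(msg))  # [45, 3, 12, 8]: four symbols, versus dozens spelled out
```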
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
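A minimal sketch of Morse's scheme, using only a partial alphabet for illustration: each letter maps to a string of short and long pulses, and the commonest letters get the shortest codes, which is part of why plain text moved faster than cipher.

```python
# Partial Morse alphabet, enough to encode a short message. Note the
# variable-length design: frequent letters such as E and T get the shortest
# codes, an informal anticipation of what Shannon later formalized.
MORSE = {
    "E": ".",   "T": "-",    "A": ".-",   "N": "-.",  "I": "..",  "S": "...",
    "O": "---", "H": "....", "R": ".-.",  "D": "-..", "L": ".-..", "W": ".--",
}

def to_morse(text: str) -> str:
    """Encode the letters we know; skip anything outside the partial table."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse("no new world"))  # prints: -. --- -. . .-- .-- --- .-. .-.. -..
```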
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
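The abstract quantity Shannon defined can be written in a few lines. A minimal sketch of the entropy, the average information in bits per symbol, evaluated on arbitrary example distributions:

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries one bit per toss; a heavily biased one carries far less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.469
```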
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
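Dyson's arithmetic is easy to verify: a doubling every eighteen months compounds to roughly a factor of a hundred per decade and roughly a billion over forty-five years. A quick check:

```python
# Compounded growth under Moore's Law as stated here: a factor of two every
# eighteen months.
def moore_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(f"{moore_factor(10):,.0f}")  # roughly a hundred per decade
print(f"{moore_factor(45):.2e}")   # roughly a billion (about 1e9) over 45 years
```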
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
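Shannon's point about redundancy can be seen with even the crudest error-correcting scheme, a repetition code decoded by majority vote. A minimal sketch; the noise level and number of copies below are arbitrary illustrative choices:

```python
import random

def send_bit(bit, noise=0.2, copies=15):
    """Send one bit 'copies' times over a channel that flips each copy with
    probability 'noise', then decode by majority vote."""
    received = [bit ^ (random.random() < noise) for _ in range(copies)]
    return int(sum(received) > copies / 2)

# Each raw transmission is wrong 20% of the time, yet the majority-voted bit
# is wrong well under 1% of the time: redundancy buys reliability.
trials = 10_000
errors = sum(send_bit(1) != 1 for _ in range(trials))
print(errors / trials)
```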
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences to avoid being stifled by its waste heat, so life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • The cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
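The "reversed" relation between energy and temperature described here is the negative heat capacity of self-gravitating systems. A compact way to see it, using the standard virial theorem rather than anything in the review itself:

```latex
% Virial theorem for a bound, self-gravitating system in equilibrium: the
% potential energy U and the kinetic (thermal) energy K satisfy U = -2K, so
% the total energy is E = K + U = -K. Losing energy by radiation (dE < 0)
% therefore increases K; since temperature tracks kinetic energy
% (K ~ (3/2) N k_B T for an ideal gas), the object gets hotter as it radiates,
% the opposite of the cooking rule.
\[
  U = -2K, \qquad E = K + U = -K, \qquad
  \mathrm{d}E < 0 \;\Longrightarrow\; \mathrm{d}K = -\mathrm{d}E > 0 .
\]
```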
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

'Gay cure' Apple iPhone app: more than 80,000 complain | Technology | guardian.co.uk - 0 views

  • Ben Summerskill, chief executive of gay rights group Stonewall, said: "At Stonewall, we've all been on this app since 8am and we can assure your readers it's having absolutely no effect."
  • A new petition letter addressed to Steve Jobs, the Apple chief executive, posted on the Change.org site last week said: "Apple doesn't allow racist or anti-Semitic apps in its app store, yet it gives the green light to an app targeting vulnerable LGBT youth with the message that their sexual orientation is a 'sin that will make your heart sick' and a 'counterfeit'."
  • The technology giant is notoriously exacting in deciding which apps it allows onto its popular iPhone and iPad devices. Last year Apple withdrew a similar anti-gay iPhone app called Manhattan Declaration after Change.org, the online activism site, handed over an 8,000-strong petition.
Weiye Loh

Roger Pielke Jr.'s Blog: Germany's Burned Bridge - 0 views

  • The politics of Merkelism are based on two principles. The first is that, if the people want it, it must be right. The second is that whatever is useful to the people must also be useful to the chancellor.
  • I have quickly calculated the implications for carbon dioxide emissions of the German decision, based on a projection of the 2020 electricity mix from RWI as reported by the Financial Times.  These estimates are shown in the graph to the left.
  • Using these numbers and the simplified carbon dioxide intensities from The Climate Fix, I calculate that carbon dioxide emissions from Germany's electricity generation, assuming constant demand, will increase by 8% from 2011 to 2020. The Breakthrough Institute also runs some numbers. See Reuters as well.
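The calculation described here is, in structure, a weighted sum: each fuel's projected share of generation times its carbon dioxide intensity, summed and compared across years at constant demand. A rough sketch of that structure only; the shares, intensities, and demand below are placeholders, not the RWI projection or The Climate Fix figures.

```python
# Illustrative only: mix shares, carbon intensities (tonnes CO2 per MWh), and
# demand (TWh) are placeholders, not the actual German data.
INTENSITY = {"coal": 1.0, "gas": 0.5, "nuclear": 0.0, "renewables": 0.0}

def emissions_mt(mix, demand_twh):
    """Annual CO2 in million tonnes: demand (TWh) times mix-weighted intensity (t/MWh)."""
    return demand_twh * sum(share * INTENSITY[fuel] for fuel, share in mix.items())

mix_2011 = {"coal": 0.45, "gas": 0.15, "nuclear": 0.20, "renewables": 0.20}
mix_2020 = {"coal": 0.45, "gas": 0.25, "nuclear": 0.00, "renewables": 0.30}

e_2011, e_2020 = emissions_mt(mix_2011, 600), emissions_mt(mix_2020, 600)
print(f"{(e_2020 / e_2011 - 1):+.0%}")  # positive whenever fossil generation replaces nuclear
```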
  •  
    In The Climate Fix I lauded Germany's forward-looking energy policies, in which they had decided to use the technologies of today as a resource from which to build a bridge to tomorrow's energy technology (German readers, please see this translated essay as well). Germany's government has now burned that bridge by announcing the phase-out of nuclear power by 2022.