New Media Ethics 2009 course / Group items tagged Human


Weiye Loh

BioCentre

  • Humanity’s End. The main premise of the book is that proposals that would supposedly promise to make us smarter like never before, or add thousands of years to our lives, seem rather far-fetched and the domain of mere fantasy. However, it is these very proposals which form the basis of many of the ideas and thoughts presented by advocates of radical enhancement, and which are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities in order for us to become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion by focusing on four key figures whose proposals form the core of the radical enhancement debate. First to be examined by Agar is Ray Kurzweil, who argues that Man and Machine will become one as technology allows us to transcend our biology. Second is Aubrey de Grey, a passionate advocate and pioneer of anti-ageing therapies intended to achieve “longevity escape velocity”. Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement, and finally James Hughes, a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids falling into any of the pitfalls of basing his argument solely upon the “playing God” question, and instead seeks to posit a well-founded argument in favour of the precautionary principle.
  • Agar directly tackles Hughes’ idea of a “democratic transhumanism.” Here, as post-humans and humans live shoulder to shoulder in wonderful harmony, all persons have access to the technologies they want in order to promote their own flourishing. Undergirding all of this is the belief that no human should feel pressured to become enhanced. Agar finds no comfort in this and instead foresees a situation in which it would be very difficult for humans to ‘choose’ to remain human. The pressure to radically enhance would be considerable, given that the radically enhanced would no doubt occupy the positions of power in society and would regard full use of enhancement techniques as a moral imperative for the good of society. For those able to withstand that pressure, a new underclass would no doubt emerge, dividing the enhanced from the un-enhanced. This is precisely the kind of society which Hughes is overly optimistic will not emerge, but which is more akin to Lee Silver’s prediction of a future split between the "GenRich" and the "naturals”. This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours, but crucially he does not elaborate further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any initial reflections the author felt able to offer would have added to the discussion and further strengthened his line of argument.
  • A Transhuman Manifesto. The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this for me was a particularly interesting read. The basic premise of Hughes’ argument is that once humans and post-humans recognise each other as citizens, they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord concerning claims against enhancement. Bostrom and Ord argue that opposition boils down to a preference for the status quo: current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134). Agar argues that, in his view, Bostrom falls into focalism – focusing on and magnifying the positives whilst ignoring the negative implications. Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so that the enhanced are no longer human. It therefore cannot be called human enhancement: the traits or capacities such enhancement affords would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity, Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and to transfer our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil’s proposals, Agar makes the perceptive comment that despite the apparent appeal of greater processing power, the result would no longer be human. Introducing chips to the human body and linking the human nervous system to computers, as per Kurzweil’s proposals, will prove interesting, but it goes beyond merely creating a copy of us so that future replication and uploading can take place. Rather, it will constitute something more akin to an upgrade. The electrochemical signals that the brain uses to achieve thought travel at 100 metres per second. This is impressive, but contrast it with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours what he calls “rational biological conservatism” (p. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in the pursuit of Kurzweil’s enhanced capacities and capabilities we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in “ecological” terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to a species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, due to this interconnectedness, we may well alter or remove our desire to perform such activities (p. 70). Agar’s subsequent discussion and reference to the work of Jerry Fodor, philosopher and cognitive scientist, is particularly helpful on the functioning of the mind by modules and the implications of human-friendly AI versus human-unfriendly AI.
  • In terms of the author’s discussion of Aubrey de Grey, what is refreshing to read from the outset is the author’s clear grasp of de Grey’s ideas and motivation. Some make the mistake of thinking he is the man who wants to live forever, when in actual fact this is not the case. De Grey wants to reverse the ageing process through Strategies for Engineered Negligible Senescence (SENS), so that people live longer and healthier lives. Establishing this clear distinction affords the author the opportunity to offer more grounded critiques of de Grey’s ideas than some of his other critics. The author makes plain that de Grey’s immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before. In terms of risk, those radically enhanced to live longer may actually be the most risk-averse and fearful people alive. Taking the example of driving a car, a forty-year-old senescing human being who gets into their car to drive to work and is involved in a fatal accident “stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality” (p. 116). In stark contrast, a negligibly senescent being involved in an accident resulting in their death stands to lose, on average, one thousand healthy, youthful years (p. 116).
  • De Grey’s response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activity such as driving will no longer be prevalent. Moreover, because we are living for longer, we will not be in such a hurry to get to places! Virtual reality comes into its own at this point, as a means by which the negligibly senescent ‘adrenaline junkie’ can engage in such activities without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping and the like in the first place. De Grey’s strategy seemingly fails to appreciate the extent to which human beings want “direct” contact with the “real” world.
  • Continuing this idea further, Agar’s subsequent discussion of the role of fire-fighters is an interesting one. A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent also means being a better fire-fighter, by virtue of increased vitality. Having recently heard de Grey speak, and having had the privilege of discussing his ideas further with him, I found Agar’s discussion of de Grey a particular highlight of the book, and it made for engaging reading. Whilst expressing concern and doubt in relation to de Grey’s ideas, Agar is nevertheless quick and gracious enough to acknowledge that if such therapies could be achieved, then de Grey is probably the best person to comment on and deliver them, given the depth of knowledge and understanding that he has built up in this area.
Weiye Loh

Rationally Speaking: Human, know thy place!

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms come from trying to deal with a specific defect, which separates them from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the invention of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism; but if we invented a device that enabled someone who was mentally challenged to function normally, I would call that modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism: modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions), and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh
      Assumes that "normal conditions" exist.
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
Weiye Loh

Do avatars have digital rights?

hi weiye, i agree with you that this brings in the topic of representation. maybe you should try taking media and representation by Dr. Ingrid to discuss more on this. Going back to your questio...

avatars

Weiye Loh

"Asian Values": a credible alternative to a universal conception of human rig... - 0 views

  • Singapore has not ratified the International Covenant on Civil and Political Rights, but as a member state of the United Nations is bound to respect “fundamental human rights”. But who decides these rights? Many commentators will argue that they are those enshrined in the Universal Declaration of Human Rights, in which Freedom of Expression is guaranteed by Article 19.
  • The United Nations Human Rights Committee has stressed that freedom of expression ensures the free political debate essential to democracy and has expressed concern that overbearing government controls of the media are incompatible with Freedom of Expression.
  • The Singapore government’s view is different. They have long asserted that human rights principles and conceptions are dominated by Western perceptions and argue for an “Asian Values” interpretation of human rights. This has been characterised as the assertion of the primacy of duty to the community over individual rights and the expectation of trust in authority and dominance of the state leaders.
  • The “Asian Values” hypothesis is equally suspect. The UDHR recognises the universal applicability of human rights and any nation party to this treaty is not permitted to restrict rights purely on cultural, religious or political grounds.
  • “Asian governments are justified in restricting civil and political rights in some circumstances in favour of social stability and economic growth. Civil and political rights are immaterial when people are destitute and society is unstable. Accordingly, as luxuries to be enjoyed once there is social order, civil and political liberties must be temporarily suspended so as to not inhibit the government’s delivery of economic and social necessities and so as to not threaten or destroy future development plans.” Whilst this argument may have been slightly more palatable if Singapore’s citizens were, in fact, destitute, the reality is that Singapore is ranked as one of the world’s wealthiest countries and boasts a high life expectancy. Thus in Singapore’s case, arguments made in favour of a “liberty trade-off” are rendered completely untenable.
  • these cultural and religious justifications for violating rights are as unacceptable as Singapore’s purported assertion of an “Asian Values” conception of human rights. Even though the Singapore government’s language is more subtle, their arguments amount to the same basic tenet: the purported justification of the denial of fundamental human rights by reference to culturally, religiously or politically specific norms. Speaking recently in New York, the UN Secretary-General, Ban Ki-moon, warned against such an interpretation of human rights:
  • “Yes, we recognize that social attitudes run deep. Yes, social change often comes only with time. Yet, let there be no confusion: where there is tension between cultural attitudes and universal human rights, universal human rights must carry the day.” The universal and fundamental nature of human rights is the founding principle on which the United Nations was built: the right to freedom of expression must be guaranteed, “Asian Values” notwithstanding.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
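The mechanics are worth a quick sketch. The snippet below is a minimal illustration, with invented phrases and symbol pairs rather than Chappe's actual tables: a shared code book maps a whole phrase to two symbols, buying both compression (two signals instead of dozens) and secrecy (the symbols are opaque to anyone without the book).

```python
# Minimal sketch of codebook signalling in the spirit of Chappe's telegraph.
# The entries are invented; the historical code books listed some eight
# thousand phrases and proper names.
CODEBOOK = {
    "enemy fleet sighted": (12, 47),  # whole phrase -> two optical symbols
    "send reinforcements": (12, 48),
    "hold position":       (13, 5),
}
REVERSE = {code: phrase for phrase, code in CODEBOOK.items()}

def encode(phrase: str) -> tuple:
    """Two symbols per known phrase, instead of signalling letter by letter."""
    return CODEBOOK[phrase.lower()]

def decode(symbols: tuple) -> str:
    """Only a station holding the same code book edition can read this."""
    return REVERSE[symbols]

assert decode(encode("Enemy fleet sighted")) == "enemy fleet sighted"
```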
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
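The letter code itself is small enough to sketch. The fragment below uses the international variant of Morse's alphabet; notice that the most frequent English letters, E and T, get the shortest codes, one reason plain text moved so quickly.

```python
# Fragment of International Morse Code: letters as short (.) and long (-) pulses.
MORSE = {
    "E": ".",    "T": "-",     # most frequent letters, shortest codes
    "A": ".-",   "N": "-.",
    "S": "...",  "O": "---",
    "H": "....", "R": ".-.",
}

def to_morse(text: str) -> str:
    """Encode letter by letter; characters outside the fragment are skipped."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse("SOS"))  # ... --- ...
```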
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
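The abstract quantity Shannon defined is now called entropy: the average number of bits needed to specify one symbol from a source whose statistics are known. For a source emitting symbol i with probability p_i,

```latex
H = -\sum_i p_i \log_2 p_i \quad \text{bits per symbol.}
```

A fair coin flip carries exactly one bit; a heavily biased coin carries less, which is why predictable, redundant messages compress so well.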
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
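The quoted factors follow directly from the eighteen-month doubling time:

```latex
2^{10/1.5} \approx 10^2 \ \text{per decade}, \qquad 2^{45/1.5} = 2^{30} \approx 1.07 \times 10^9 \ \text{over forty-five years,}
```

that is, a hundredfold per decade and the nine powers of ten Dyson cites.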
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
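Converting Shannon's estimate into modern units makes the comparison with a single disc drive concrete:

```latex
10^{14} \ \text{bits} = \tfrac{10^{14}}{8} \ \text{bytes} = 1.25 \times 10^{13} \ \text{bytes} \approx 12.5 \ \text{terabytes.}
```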
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in a 1386 parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
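The redundancy principle can be demonstrated with the crudest error-correcting scheme there is, a triple-repetition code: transmit every bit three times and let the receiver take a majority vote. This is only a sketch of Shannon's idea; practical systems use far more efficient codes.

```python
import random

def encode(bits):
    """Add redundancy: transmit three copies of every bit."""
    return [b for bit in bits for b in (bit, bit, bit)]

def noisy_channel(bits, flip_prob=0.1):
    """Flip each transmitted bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote over each group of three received copies."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [random.randint(0, 1) for _ in range(10_000)]
received = decode(noisy_channel(encode(message)))
errors = sum(m != r for m, r in zip(message, received))
# Uncoded, ~10% of bits would arrive wrong; majority voting cuts that to
# roughly 2.8% (3p^2(1-p) + p^3 for p = 0.1); more redundancy cuts it further.
print(f"{errors} errors in {len(message)} bits")
```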
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
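The reversal Dyson describes is the negative heat capacity of self-gravitating systems, and it falls out of the virial theorem for a bound system in equilibrium (a standard result, sketched here):

```latex
2\langle K \rangle + \langle U \rangle = 0 \quad\Longrightarrow\quad E = \langle K \rangle + \langle U \rangle = -\langle K \rangle .
```

Radiating energy away makes E more negative, so the average kinetic energy, and with it the temperature, must rise: the sun loses energy and gets hotter.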
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

The U.N. Declares Internet Access a Human Right - Technology - The Atlantic Wire

  • The United Nations counts internet access as a basic human right in a report that bears implications both for on-going events in the Arab Spring and for the Obama administration's war on whistleblowers. Acting as special rapporteur, a human rights watchdog role appointed by the UN Secretary General, Frank La Rue takes a hard line on the importance of the internet as "an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress." Presented to the General Assembly on Friday, La Rue's report comes as the capstone of a year's worth of meetings held between La Rue and local human rights organizations around the world, from Cairo to Bangkok. The report's introduction points to the impact of online collaboration in the Arab Spring and says that "facilitating access to the Internet for all individuals, with as little restriction to online content as possible, should be a priority for all States."
Weiye Loh

China calls out US human rights abuses: laptop searches, 'Net porn

  • The report makes no real attempt to provide context to a huge selection of news articles about bad things happening in the US, piled up one against each other in almost random fashion.
  • As the UK's Guardian paper noted, "While some of the data cited in the report is derived from official or authoritative sources, other sections are composed from a mishmash of online material. One figure on crime rates is attributed to '10 Facts About Crime in the United States that Will Blow Your Mind, Beforitsnews.com'." The opening emphasis on US crime is especially odd; crime rates in the US are the lowest they have been in decades; the drop-off has been so dramatic that books have been written in attempts to explain it.
  • But the report does provide an interesting perspective on the US, especially when it comes to technology, and it's not all off base. China points to US laptop border searches as a problem (and they are): According to figures released by the American Civil Liberties Union (ACLU) in September 2010, more than 6,600 travelers had been subject to electronic device searches between October 1, 2008 and June 2, 2010, nearly half of them American citizens. A report in The Wall Street Journal on September 7, 2010, said the Department of Homeland Security (DHS) was sued over its policies that allegedly authorize the search and seizure of laptops, cellphones and other electronic devices without a reasonable suspicion of wrongdoing. The policies were claimed to leave no limit on how long the DHS can keep a traveler's devices or on the scope of private information that can be searched, copied, or detained. There is no provision for judicial approval or supervision. When Colombian journalist Hollman Morris sought a US student visa so he could take a fellowship for journalists at Harvard University, his application was denied on July 17, 2010, as he was ineligible under the "terrorist activities" section of the USA Patriot Act. An Arab American named Yasir Afifi, living in California, found that the FBI had attached an electronic GPS tracking device near the right rear wheel of his car.
  • China also sees hypocrisy in American discussions of Internet freedom. China comes in regularly for criticism over its "Great Firewall," but it suggests that the US government also restricts the Internet. While advocating Internet freedom, the US in fact imposes fairly strict restriction on cyberspace. On June 24, 2010, the US Senate Committee on Homeland Security and Governmental Affairs approved the Protecting Cyberspace as a National Asset Act, which will give the federal government "absolute power" to shut down the Internet under a declared national emergency. Handing government the power to control the Internet will only be the first step towards a greatly restricted Internet system, whereby individual IDs and government permission would be required to operate a website. The United States applies double standards on Internet freedom by requesting unrestricted "Internet freedom" in other countries, which becomes an important diplomatic tool for the United States to impose pressure and seek hegemony, and imposing strict restriction within its territory. An article on BBC on February 16, 2011 noted the US government wants to boost Internet freedom to give voices to citizens living in societies regarded as "closed" and questions those governments' control over information flow, although within its borders the US government tries to create a legal frame to fight the challenge posed by WikiLeaks. The US government might be sensitive to the impact of the free flow of electronic information on its territory for which it advocates, but it wants to practice diplomacy by other means, including the Internet, particularly the social networks. (The cyberspace bill never became law, and a revised version is still pending in Congress.)
  • Finally, there's pornography, which China bans. Pornographic content is rampant on the Internet and severely harms American children. Statistics show that seven in 10 children have accidentally accessed pornography on the Internet and one in three has done so intentionally. And the average age of exposure is 11 years old - some start at eight years old (The Washington Times, June 16, 2010). According to a survey commissioned by the National Campaign to Prevent Teen and Unplanned Pregnancy, 20 percent of American teens have sent or posted nude or seminude pictures or videos of themselves. (www.co.jefferson.co.us, March 23, 2010). At least 500 profit-oriented nude chat websites were set up by teens in the United States, involving tens of thousands of pornographic pictures.
  • Upset over the US State Department's annual human rights report, China publishes a report of its own on various US ills. This year, it calls attention to America's border laptop searches, its attitude toward WikiLeaks, and the prevalence of online pornography. In case the report's purpose wasn't clear, China Foreign Ministry spokesman Hong Lei said this weekend, "We advise the US side to reflect on its own human rights issue, stop acting as a preacher of human rights as well as interfering in other countries' internal affairs by various means including issuing human rights reports."
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim.
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • ...17 more annotations...
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average.
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society.
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
  • Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

The Black Swan of Cairo | Foreign Affairs

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
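The contrast behind "tail risk" can be made precise. For a thin-tailed (Gaussian) variable, the probability of an outcome many standard deviations out decays super-exponentially; for a fat-tailed (power-law) variable it decays only polynomially, so extreme events stay material no matter how calm the recent record looks. Roughly:

```latex
P(X > x) \sim \frac{e^{-x^2/2}}{x\sqrt{2\pi}} \ \ \text{(Gaussian)} \qquad \text{vs.} \qquad P(X > x) \sim C\,x^{-\alpha} \ \ \text{(power law).}
```

Suppressing everyday volatility can make the observed record look thin-tailed while the underlying exposure stays fat-tailed, which is the kind of misperception the essay blames for the blowups.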
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • ...21 more annotations...
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations. Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007.
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error. (A toy simulation of this sand-pile dynamic appears after these annotations.)
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
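
To make the sand-pile image in the annotations above concrete, here is a minimal toy simulation in Python. It is loosely modelled on the Bak-Tang-Wiesenfeld sandpile, not on anything in the article itself, and every name and parameter (SIZE, THRESHOLD, the number of drops) is an arbitrary choice for illustration.

    import random

    SIZE = 20         # grid is SIZE x SIZE; arbitrary illustrative choice
    THRESHOLD = 4     # a cell topples once it holds this many grains
    grid = [[0] * SIZE for _ in range(SIZE)]

    def drop_grain():
        """Add one grain at a random site; return the avalanche size."""
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        grid[x][y] += 1
        toppled = 0
        stack = [(x, y)]
        while stack:
            i, j = stack.pop()
            if grid[i][j] < THRESHOLD:
                continue                      # calm cell: nothing visible happens
            grid[i][j] -= THRESHOLD           # the cell collapses...
            toppled += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    grid[ni][nj] += 1         # ...pushing grains onto neighbours,
                    stack.append((ni, nj))    # which may now topple in turn
                # grains pushed off the edge simply leave the system
        return toppled

    # Most drops do nothing visible; a rare one cascades across the pile.
    sizes = [drop_grain() for _ in range(50000)]
    print("largest avalanche:", max(sizes))
    print("drops with no visible effect:", sizes.count(0))

The grain that triggers a big cascade is no different from any other grain; as the annotation says, it is the structure of the pile, not the final grain, that explains the collapse.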
Weiye Loh

Robert W. Fogel Investigates Human Evolution - NYTimes.com - 0 views

  • Cambridge University Press will publish the capstone of this inquiry, “The Changing Body: Health, Nutrition, and Human Development in the Western World Since 1700,” just a few weeks shy of Mr. Fogel’s 85th birthday. The book, which sums up the work of dozens of researchers on one of the most ambitious projects undertaken in economic history, is sure to renew debates over Mr. Fogel’s groundbreaking theories about what some regard as the most significant development in humanity’s long history.
  • Mr. Fogel and his co-authors, Roderick Floud, Bernard Harris and Sok Chul Hong, maintain that “in most if not quite all parts of the world, the size, shape and longevity of the human body have changed more substantially, and much more rapidly, during the past three centuries than over many previous millennia.” What’s more, they write, this alteration has come about within a time frame that is “minutely short by the standards of Darwinian evolution.”
  • “The rate of technological and human physiological change in the 20th century has been remarkable,” Mr. Fogel said in a telephone interview from Chicago, where he is the director of the Center for Population Economics at the University of Chicago’s business school. “Beyond that, a synergy between the improved technology and physiology is more than the simple addition of the two.”
  • ...2 more annotations...
  • This “technophysio evolution,” powered by advances in food production and public health, has so outpaced traditional evolution, the authors argue, that people today stand apart not just from every other species, but from all previous generations of Homo sapiens as well.
  •  “I don’t know that there is a bigger story in human history than the improvements in health, which include height, weight, disability and longevity,” said Samuel H. Preston, one of the world’s leading demographers and a sociologist at the University of Pennsylvania. Without the 20th century’s improvements in nutrition, sanitation and medicine, only half of the current American population would be alive today, he said.
  •  
    For nearly three decades, the Nobel Prize-winning economist Robert W. Fogel and a small clutch of colleagues have assiduously researched what the size and shape of the human body say about economic and social changes throughout history, and vice versa. Their research has spawned not only a new branch of historical study but also a provocative theory that technology has sped human evolution in an unprecedented way during the past century.
Weiye Loh

Genome Biology | Full text | A Faustian bargain - 0 views

  • on October 1st, you announced that the departments of French, Italian, Classics, Russian and Theater Arts were being eliminated. You gave several reasons for your decision, including that 'there are comparatively fewer students enrolled in these degree programs.' Of course, your decision was also, perhaps chiefly, a cost-cutting measure - in fact, you stated that this decision might not have been necessary had the state legislature passed a bill that would have allowed your university to set its own tuition rates. Finally, you asserted that the humanities were a drain on the institution financially, as opposed to the sciences, which bring in money in the form of grants and contracts.
  • I'm sure that relatively few students take classes in these subjects nowadays, just as you say. There wouldn't have been many in my day, either, if universities hadn't required students to take a distribution of courses in many different parts of the academy: humanities, social sciences, the fine arts, the physical and natural sciences, and to attain minimal proficiency in at least one foreign language. You see, the reason that humanities classes have low enrollment is not because students these days are clamoring for more relevant courses; it's because administrators like you, and spineless faculty, have stopped setting distribution requirements and started allowing students to choose their own academic programs - something I feel is a complete abrogation of the duty of university faculty as teachers and mentors. You could fix the enrollment problem tomorrow by instituting a mandatory core curriculum that included a wide range of courses.
  • the vast majority of humanity cannot handle freedom. In giving humans the freedom to choose, Christ has doomed humanity to a life of suffering.
  • ...7 more annotations...
  • in Dostoyevsky's parable of the Grand Inquisitor, which is told in Chapter Five of his great novel, The Brothers Karamazov. In the parable, Christ comes back to earth in Seville at the time of the Spanish Inquisition. He performs several miracles but is arrested by Inquisition leaders and sentenced to be burned at the stake. The Grand Inquisitor visits Him in his cell to tell Him that the Church no longer needs Him. The main portion of the text is the Inquisitor explaining why. The Inquisitor says that Jesus rejected the three temptations of Satan in the desert in favor of freedom, but he believes that Jesus has misjudged human nature.
  • I'm sure the budgetary problems you have to deal with are serious. They certainly are at Brandeis University, where I work. And we, too, faced critical strategic decisions because our income was no longer enough to meet our expenses. But we eschewed your draconian - and authoritarian - solution, and a team of faculty, with input from all parts of the university, came up with a plan to do more with fewer resources. I'm not saying that all the specifics of our solution would fit your institution, but the process sure would have. You did call a town meeting, but it was to discuss your plan, not let the university craft its own. And you called that meeting for Friday afternoon on October 1st, when few of your students or faculty would be around to attend. In your defense, you called the timing 'unfortunate', but pleaded that there was a 'limited availability of appropriate large venue options.' I find that rather surprising. If the President of Brandeis needed a lecture hall on short notice, he would get one. I guess you don't have much clout at your university.
  • As for the argument that the humanities don't pay their own way, well, I guess that's true, but it seems to me that there's a fallacy in assuming that a university should be run like a business. I'm not saying it shouldn't be managed prudently, but the notion that every part of it needs to be self-supporting is simply at variance with what a university is all about.
  • You seem to value entrepreneurial programs and practical subjects that might generate intellectual property more than you do 'old-fashioned' courses of study. But universities aren't just about discovering and capitalizing on new knowledge; they are also about preserving knowledge from being lost over time, and that requires a financial investment.
  • what seems to be archaic today can become vital in the future. I'll give you two examples of that. The first is the science of virology, which in the 1970s was dying out because people felt that infectious diseases were no longer a serious health problem in the developed world and other subjects, such as molecular biology, were much sexier. Then, in the early 1990s, a little problem called AIDS became the world's number 1 health concern. The virus that causes AIDS was first isolated and characterized at the National Institutes of Health in the USA and the Institut Pasteur in France, because these were among the few institutions that still had thriving virology programs. My second example you will probably be more familiar with. Middle Eastern Studies, including the study of foreign languages such as Arabic and Persian, was hardly a hot subject on most campuses in the 1990s. Then came September 11, 2001. Suddenly we realized that we needed a lot more people who understood something about that part of the world, especially its Muslim culture. Those universities that had preserved their Middle Eastern Studies departments, even in the face of declining enrollment, suddenly became very important places. Those that hadn't - well, I'm sure you get the picture.
  • one of your arguments is that not every place should try to do everything. Let other institutions have great programs in classics or theater arts, you say; we will focus on preparing students for jobs in the real world. Well, I hope I've just shown you that the real world is pretty fickle about what it wants. The best way for people to be prepared for the inevitable shock of change is to be as broadly educated as possible, because today's backwater is often tomorrow's hot field. And interdisciplinary research, which is all the rage these days, is only possible if people aren't too narrowly trained. If none of that convinces you, then I'm willing to let you turn your institution into a place that focuses on the practical, but only if you stop calling it a university and yourself the President of one. You see, the word 'university' derives from the Latin 'universitas', meaning 'the whole'. You can't be a university without having a thriving humanities program. You will need to call SUNY Albany a trade school, or perhaps a vocational college, but not a university. Not anymore.
  • I started out as a classics major. I'm now Professor of Biochemistry and Chemistry. Of all the courses I took in college and graduate school, the ones that have benefited me the most in my career as a scientist are the courses in classics, art history, sociology, and English literature. These courses didn't just give me a much better appreciation for my own culture; they taught me how to think, to analyze, and to write clearly. None of my sciences courses did any of that.
Weiye Loh

TwitLonger: "Once we abandon human valuations as the sole reference system for human ac... - 0 views

  • Humans have no way of entering into communication with other species. All that happens is that some human agent argues on behalf of another species on the pretext that she or he knows what serves that species. When we abandon human valuations and logical discourse about them, the 'interests of Nature' therefore become an excuse for some self-appointed elite to overrule human valuations. The protagonists will claim superior knowledge about what is good for nature conservation and then enforce their decisions against the wishes of the majority of people.
  •  
    Once we abandon human valuations as the sole reference system for human action, we have to ask whose valuations are to replace them
Weiye Loh

Online "Toon porn" - 20 views

I must correct that never in my arguments did I mention that the interpreter is the problem. I was merely answering YZ's question if cartoon characters can be deemed as representative of human be...

online cartoon anime pornography ethics

Jude John

What's so Original in Academic Research? - 26 views

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Weiye Loh

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but after all, western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games. (A back-of-envelope sketch of where such numbers come from appears at the end of this entry.)
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • ...2 more annotations...
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's defeat was the first since 2007, when a simpler computer system was beaten by a (male) champion, Akira Watanabe.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
  •  
    Computer beats human at Japanese chess for first time
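
As a rough aside on where such astronomical game counts come from: a Shannon-style estimate raises a typical branching factor to the power of a typical game length. The Python sketch below uses commonly cited ballpark averages (about 35 legal moves over about 80 plies for chess; about 80 moves over about 115 plies for shogi). These inputs are assumptions, and the results land near, though not exactly on, the figures quoted above.

    import math

    def log10_games(branching, plies):
        """log10 of branching**plies -- a Shannon-style game-tree estimate."""
        return plies * math.log10(branching)

    # Ballpark averages only; treat both pairs of inputs as assumptions.
    print(f"chess: ~10^{log10_games(35, 80):.0f} games")    # prints ~10^124
    print(f"shogi: ~10^{log10_games(80, 115):.0f} games")   # prints ~10^219

The point stands regardless of the exact inputs: shogi's larger board and its rule letting captured pieces be dropped back into play push its game tree roughly a hundred orders of magnitude beyond chess's.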
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • Almost never used in English before the 1940s, “human rights” were mentioned in the New York Times five times as often in 1977 as in any prior year of the newspaper’s history. By the nineties, human rights had become central to the thinking not only of liberals but also of neoconservatives, who urged military intervention and regime change in the faith that these freedoms would blossom once tyranny was toppled. From being almost peripheral, the human-rights agenda found itself at the heart of politics and international relations.
  • In fact, it has become entrenched in extremis: nowadays, anyone who is skeptical about human rights is angrily challenged
  • The contemporary human-rights movement is demonstrably not the product of a revulsion against the worst crimes of Nazism. For one thing, the Holocaust did not figure in the deliberations that led up to the Universal Declaration of Human Rights adopted by the UN in 1948.
  • ...1 more annotation...
  • Contrary to received history, the rise of human rights had very little to do with the worst crime against humanity ever committed.
Weiye Loh

What humans know that Watson doesn't - CNN.com - 0 views

  • One of the most frustrating experiences produced by the winter from hell is dealing with the airlines' automated answer systems. Your flight has just been canceled and every second counts in getting an elusive seat. Yet you are stuck in an automated menu spelling out the name of your destination city.
  • Even more frustrating is knowing that you will never get to ask the question you really want to ask, as it isn't an option: "If I drive to Newark and board my flight to Tel Aviv there, will you cancel my whole trip, as I haven't started from my ticketed airport of origin, Ithaca?"
  • A human would immediately understand the question and give you an answer. That's why knowledgeable travelers rush to the nearest airport when they experience a cancellation, so they have a chance to talk to a human agent who can override the computer, rather than rebook by phone (more likely wait on hold and listen to messages about how wonderful a destination Tel Aviv is) or talk to a computer.
  • ...6 more annotations...
  • There is no doubt the IBM supercomputer Watson gave an impressive performance on "Jeopardy!" this week. But I was worried by the computer's biggest fluff Tuesday night. In answer to the question about naming a U.S. city whose largest airport is named after a World War II hero and its second largest after a World War II battle, it gave Toronto, Ontario. Not even close!
  • Both the humans on the program knew the correct answer: Chicago. Even a famously geographically challenged person like me
  • Why did I know it? Because I have spent enough time stranded at O'Hare to have visited the monument to Butch O'Hare in the terminal. Watson, who has not, came up with the wrong answer. This reveals precisely what Watson lacks -- embodiment.
  • Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.
  • Providing unique answers to questions limited to around 25 words is not the same as dealing with real problems of an emotionally distraught passenger in an open system where there may not be a unique answer.
  • Watson beating the pants out of us on "Jeopardy!" is fun -- rather like seeing a tractor beat a human tug-of-war team. Machines have always been better than humans at some tasks.
Weiye Loh

CultureLab: Thoughts within thoughts make us human - 0 views

  • Corballis reckons instead that the thought processes that made language possible were non-linguistic, but had recursive properties to which language adapted: "Where Chomsky views thought through the lens of language, I prefer to view language through the lens of thought." From this, says Corballis, follows a better understanding of how humans actually think - and a very different perspective on language and its evolution.
  • So how did recursion help ancient humans pull themselves up by their cognitive bootstraps? It allowed us to engage in mental time travel, says Corballis, the recursive operation whereby we recall past episodes into present consciousness and imagine future ones, and sometimes even insert fictions into reality.
  • theory of mind is uniquely highly developed in humans: I may know not only what you are thinking, says Corballis, but also that you know what I am thinking. Most - but not all - language depends on this capability. (A toy sketch of this kind of recursive embedding appears at the end of this entry.)
  • ...3 more annotations...
  • Corballis's theories also help make sense of apparent anomalies such as linguist and anthropologist Daniel Everett's work on the Pirahã, an Amazonian people who hit the headlines because of debates over whether their language has any words for colours, and, crucially, numbers. Corballis now thinks that the Pirahã language may not be that unusual, and cites the example of other languages from oral cultures, such as the Iatmul language of New Guinea, which is also said to lack recursion.
  • The emerging point is that recursion developed in the mind and need not be expressed in a language. But, as Corballis is at pains to point out, although recursion was critical to the evolution of the human mind, it is not one of those "modules" much beloved of evolutionary psychologists, many of which are said to have evolved in the Pleistocene. Nor did it depend on some genetic mutation or the emergence of some new neuron or brain structure. Instead, he suggests it came of progressive increases in short-term memory and capacity for hierarchical organisation - all dependent in turn on incremental increases in brain size.
  • But as Corballis admits, this brain size increase was especially rapid in the Pleistocene. These incremental changes can lead to sudden more substantial jumps - think water boiling or balloons popping. In mathematics these shifts are called catastrophes. So, notes Corballis, wryly, "we may perhaps conclude that the emergence of the human mind was catastrophic". Let's hope that's not too prescient.
  •  
    His new book, The Recursive Mind: The origins of human language, thought, and civilization, is a fascinating and well-grounded exposition of the nature and power of recursion. In its ultra-reasonable way, this is quite a revolutionary book because it attacks key notions about language and thought. Most notably, it disputes the idea, argued especially by linguist Noam Chomsky, that thought is fundamentally linguistic - in other words, you need language before you can have thoughts.
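
To show concretely what recursive embedding of mental states looks like, here is a small Python sketch of "Ann knows that Bob knows that Ann knows X" as a self-embedding data structure. It is purely illustrative and not Corballis's formalism; the names Belief, render, depth and the example sentence are all invented.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Belief:
        holder: str                      # who holds the mental state
        content: Union["Belief", str]    # a bare fact, or another mental state

    def render(b: Union[Belief, str]) -> str:
        """Unfold the nesting into an English-like sentence."""
        if isinstance(b, str):
            return b
        return f"{b.holder} knows that {render(b.content)}"

    def depth(b: Union[Belief, str]) -> int:
        """Count the levels of embedding -- thoughts within thoughts."""
        return 0 if isinstance(b, str) else 1 + depth(b.content)

    nested = Belief("Ann", Belief("Bob", Belief("Ann", "the key is under the mat")))
    print(render(nested))   # Ann knows that Bob knows that Ann knows that ...
    print(depth(nested))    # 3 levels of theory-of-mind embedding

Both functions must call themselves because the structure embeds itself; that mirrors the book's claim that recursion is a property of the thought, which language then expresses.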
Weiye Loh

Don't dumb me down | Science | The Guardian - 0 views

  • Science stories usually fall into three families: wacky stories, scare stories and "breakthrough" stories.
  • these stories are invariably written by the science correspondents, and hotly followed, to universal jubilation, with comment pieces, by humanities graduates, on how bonkers and irrelevant scientists are.
  • A close relative of the wacky story is the paradoxical health story. Every Christmas and Easter, regular as clockwork, you can read that chocolate is good for you (www.badscience.net/?p=67), just like red wine is, and with the same monotonous regularity
  • ...19 more annotations...
  • At the other end of the spectrum, scare stories are - of course - a stalwart of media science. Based on minimal evidence and expanded with poor understanding of its significance, they help perform the most crucial function for the media, which is selling you, the reader, to their advertisers. The MMR disaster was a fantasy entirely of the media's making (www.badscience.net/?p=23), which failed to go away. In fact the Daily Mail is still publishing hysterical anti-immunisation stories, including one calling the pneumococcus vaccine a "triple jab", presumably because they misunderstood that the meningitis, pneumonia, and septicaemia it protects against are all caused by the same pneumococcus bacteria (www.badscience.net/?p=118).
  • people periodically come up to me and say, isn't it funny how that Wakefield MMR paper turned out to be Bad Science after all? And I say: no. The paper always was and still remains a perfectly good small case series report, but it was systematically misrepresented as being more than that, by media that are incapable of interpreting and reporting scientific data.
  • Once journalists get their teeth into what they think is a scare story, trivial increases in risk are presented, often out of context, but always using one single way of expressing risk, the "relative risk increase", that makes the danger appear disproportionately large (www.badscience.net/?p=8). (A worked example of relative versus absolute risk appears after these annotations.)
  • The media obsession with "new breakthroughs": a more subtly destructive category of science story. It's quite understandable that newspapers should feel it's their job to write about new stuff. But in the aggregate, these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data
  • Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months and several bits of evidence, not single rejiggable press releases. Often, a front page science story will emerge from a press release alone, and the formal academic paper may never appear, or appear much later, and then not even show what the press reports claimed it would (www.badscience.net/?p=159).
  • there was an interesting essay in the journal PLoS Medicine, about how most brand new research findings will turn out to be false (www.tinyurl.com/ceq33). It predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense; and this is the perfect example of the parody hypothesis that we'll see later. Scientists know how to read a paper. That's what they do for a living: read papers, pick them apart, pull out what's good and bad.
  • Scientists never said that tenuous small new findings were important headline news - journalists did.
  • there is no useful information in most science stories. A piece in the Independent on Sunday from January 11 2004 suggested that mail-order Viagra is a rip-off because it does not contain the "correct form" of the drug. I don't use the stuff, but there were 1,147 words in that piece. Just tell me: was it a different salt, a different preparation, a different isomer, a related molecule, a completely different drug? No idea. No room for that one bit of information.
  • Remember all those stories about the danger of mobile phones? I was on holiday at the time, and not looking things up obsessively on PubMed; but off in the sunshine I must have read 15 newspaper articles on the subject. Not one told me what the experiment flagging up the danger was. What was the exposure, the measured outcome, was it human or animal data? Figures? Anything? Nothing. I've never bothered to look it up for myself, and so I'm still as much in the dark as you.
  • Because papers think you won't understand the "science bit", all stories involving science must be dumbed down, leaving pieces without enough content to stimulate the only people who are actually going to read them - that is, the people who know a bit about science.
  • Compare this with the book review section, in any newspaper. The more obscure references to Russian novelists and French philosophers you can bang in, the better writer everyone thinks you are. Nobody dumbs down the finance pages.
  • Statistics are what causes the most fear for reporters, and so they are usually just edited out, with interesting consequences. Because science isn't about something being true or not true: that's a humanities graduate parody. It's about the error bar, statistical significance, it's about how reliable and valid the experiment was, it's about coming to a verdict, about a hypothesis, on the back of lots of bits of evidence.
  • science journalists somehow don't understand the difference between the evidence and the hypothesis. The Times's health editor Nigel Hawkes recently covered an experiment which showed that having younger siblings was associated with a lower incidence of multiple sclerosis. MS is caused by the immune system turning on the body. "This is more likely to happen if a child at a key stage of development is not exposed to infections from younger siblings, says the study." That's what Hawkes said. Wrong! That's the "Hygiene Hypothesis", that's not what the study showed: the study just found that having younger siblings seemed to be somewhat protective against MS: it didn't say, couldn't say, what the mechanism was, like whether it happened through greater exposure to infections. He confused evidence with hypothesis (www.badscience.net/?p=112), and he is a "science communicator".
  • how do the media work around their inability to deliver scientific evidence? They use authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. "Scientists today said ... scientists revealed ... scientists warned." And if they want balance, you'll get two scientists disagreeing, although with no explanation of why (an approach at its most dangerous with the myth that scientists were "divided" over the safety of MMR). One scientist will "reveal" something, and then another will "challenge" it
  • The danger of authority figure coverage, in the absence of real evidence, is that it leaves the field wide open for questionable authority figures to waltz in. Gillian McKeith, Andrew Wakefield, Kevin Warwick and the rest can all get a whole lot further, in an environment where their authority is taken as read, because their reasoning and evidence is rarely publicly examined.
  • it also reinforces the humanities graduate journalists' parody of science, for which we now have all the ingredients: science is about groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality: they do work that is either wacky, or dangerous, but either way, everything in science is tenuous, contradictory and, most ridiculously, "hard to understand".
  • This misrepresentation of science is a direct descendant of the reaction, in the Romantic movement, against the birth of science and empiricism more than 200 years ago; it's exactly the same paranoid fantasy as Mary Shelley's Frankenstein, only not as well written. We say descendant, but of course, the humanities haven't really moved forward at all, except to invent cultural relativism, which exists largely as a pooh-pooh reaction against science. And humanities graduates in the media, who suspect themselves to be intellectuals, desperately need to reinforce the idea that science is nonsense: because they've denied themselves access to the most significant developments in the history of western thought for 200 years, and secretly, deep down, they're angry with themselves over that.
  • had a good spirited row with an eminent science journalist, who kept telling me that scientists needed to face up to the fact that they had to get better at communicating to a lay audience. She is a humanities graduate. "Since you describe yourself as a science communicator," I would invariably say, to the sound of derisory laughter: "isn't that your job?" But no, for there is a popular and grand idea about, that scientific ignorance is a useful tool: if even they can understand it, they think to themselves, the reader will. What kind of a communicator does that make you?
  • Science is done by scientists, who write it up. Then a press release is written by a non-scientist, who runs it by their non-scientist boss, who then sends it to journalists without a science education who try to convey difficult new ideas to an audience of either lay people, or more likely - since they'll be the ones interested in reading the stuff - people who know their way around a t-test a lot better than any of these intermediaries. Finally, it's edited by a whole team of people who don't understand it. You can be sure that at least one person in any given "science communication" chain is just juggling words about on a page, without having the first clue what they mean, pretending they've got a proper job, their pens all lined up neatly on the desk.
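
To see why the "relative risk increase" framing flatters a scare story, here is a short worked example in Python. The numbers are invented purely for illustration and do not come from any study mentioned in the piece.

    # Invented numbers, for illustration only.
    baseline_risk = 2 / 10000   # assumed risk of the outcome without exposure
    exposed_risk  = 3 / 10000   # assumed risk with exposure

    relative_increase = (exposed_risk - baseline_risk) / baseline_risk
    absolute_increase = exposed_risk - baseline_risk
    extra_per_10000 = absolute_increase * 10000

    print(f"Relative risk increase: {relative_increase:.0%}")    # 50%
    print(f"Absolute risk increase: {absolute_increase:.4%}")    # 0.0100%
    print(f"Extra cases per 10,000 people: {extra_per_10000:.0f}")

The same finding reads either as "50% higher risk" or as one extra case in every 10,000 people; the annotations above argue that journalists almost always print only the first, stripped of context.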
Weiye Loh

microphilosophy | The most human human? - 0 views

  •  
    Can artificial intelligence teach us about what it means to be human? That is the fascinating question behind Brian Christian's recent book, The Most Human Human. In this programme, Julian Baggini is in conversation with Christian, recorded live at the Bristol Festival of Ideas at the Arnolfini Centre earlier this year.