
Home/ New Media Ethics 2009 course/ Group items tagged Artificial Intelligence


Weiye Loh

microphilosophy | The most human human? - 0 views

  •  
    Can artificial intelligence teach us about what it means to be human? That is the fascinating question behind Brian Christian's recent book, The Most Human Human. In this programme, Julian Baggini is in conversation with Christian, recorded live at the Bristol Festival of Ideas at the Arnolfini Centre earlier this year.
Weiye Loh

BioCentre - 0 views

  • Humanity’s End. The main premise of the book is that proposals promising to make us smarter than ever before, or to add thousands of years to our lives, seem far-fetched and the stuff of mere fantasy. However, it is these very proposals that form the basis of many of the ideas presented by advocates of radical enhancement, and they are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities so that we might become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion around four key figures whose proposals form the core of the case for radical enhancement. First to be examined is Ray Kurzweil, who argues that man and machine will become one as technology allows us to transcend our biology. Second is Aubrey de Grey, a passionate advocate and pioneer of anti-ageing therapies that would allow us to achieve “longevity escape velocity”. Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement; and finally James Hughes, a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids the pitfalls of basing his argument solely on the “playing God” question and instead posits a well-founded argument in favour of the precautionary principle.
  • ...10 more annotations...
  • Agar directly tackles Hughes’ idea of a “democratic transhumanism”, in which post-humans and humans live shoulder to shoulder in wonderful harmony and all persons have access to the technologies they want in order to promote their own flourishing. Undergirding all of this is the belief that no human should feel pressured to become enhanced. Agar finds no comfort in this and instead foresees a situation in which it would be very difficult for humans to ‘choose’ to remain human. The pressure to radically enhance would be considerable, given that the radically enhanced would no doubt occupy the positions of power in society and would consider full use of enhancement techniques a moral imperative for the good of society. For those able to withstand that pressure, a new underclass would no doubt emerge, dividing the enhanced from the un-enhanced. This is precisely the kind of society which Hughes appears overly optimistic will not emerge, but which is more akin to Lee Silver’s prediction of a future divided between the “GenRich” and the “naturals”. This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours, but crucially he does not elaborate further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any initial reflections the author felt able to offer would have added to the discussion and further strengthened his line of argument.
  • A Transhuman Manifesto. The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this was for me a particularly interesting read. The basic premise of Hughes’ argument is that once humans and post-humans recognise each other as citizens, they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord concerning claims against enhancement. Bostrom and Ord argue that such claims boil down to a preference for the status quo: current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134). Agar argues that, in his view, Bostrom falls into a focalism, focusing on and magnifying the positives whilst ignoring the negative implications. Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so the result is no longer human. It therefore cannot be called human enhancement: the traits or capacities such enhancement affords would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and transferring our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil’s proposals, Agar makes the perceptive comment that despite the apparent appeal of greater processing power, the result would no longer be human. Introducing chips into the human body and linking the human nervous system to computers, as Kurzweil proposes, goes beyond merely creating a copy of us so that future replication and uploading can take place; rather, it constitutes something more akin to an upgrade. The electrochemical signals the brain uses to achieve thought travel at 100 metres per second. This is impressive, but contrast it with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours what he calls “rational biological conservatism” (p. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in pursuing the enhanced capacities and capabilities Kurzweil describes, we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in “ecological” terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to our species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, because of this interconnectedness, we may well remove our desire to perform such activities (p. 70). Agar’s subsequent discussion of the work of the philosopher and cognitive scientist Jerry Fodor is particularly helpful on the modular functioning of the mind and the implications of human-friendly versus human-unfriendly AI.
  • In terms of the author’s discussion of Aubrey de Grey, what is refreshing from the outset is his clear grasp of de Grey’s ideas and motivation. Some make the mistake of thinking de Grey is the man who wants to live forever, when in fact this is not the case. De Grey wants to reverse the ageing process through his Strategies for Engineered Negligible Senescence (SENS), so that people live longer and healthier lives. Establishing this clear distinction allows the author to offer more grounded critiques of de Grey than do some of his other critics. The author makes plain that de Grey’s immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before. In terms of risk, those radically enhanced to live longer may actually be the most risk-averse and fearful people alive. Take the example of driving a car: a forty-year-old senescing human being who drives to work and is involved in a fatal accident “stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality” (p. 116). In stark contrast, a negligibly senescent being involved in a fatal car accident stands to lose, on average, a thousand healthy, youthful years (p. 116).
  • De Grey’s response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activities such as driving will no longer be prevalent. Moreover, because we are living longer, we will not be in such a hurry to get to places! Virtual reality comes into its own at this point as a means by which the negligibly senescent ‘adrenaline junkie’ can engage in such activities without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping et al. in the first place. De Grey’s strategy seemingly fails to appreciate the extent to which human beings want “direct” contact with the “real” world.
  • Continuing this idea further, Agar’s subsequent discussion of the role of fire-fighters is an interesting one. A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent also makes for a better fire-fighter by virtue of increased vitality. Having recently heard de Grey speak, and having had the privilege of discussing his ideas further with him, I found Agar’s discussion of de Grey a particular highlight of the book; it made for an engaging read. Whilst expressing concern and doubt about de Grey’s ideas, Agar is nevertheless quick and gracious enough to acknowledge that if such therapies could be achieved, then de Grey is probably the best person to comment on and achieve them, given the depth of knowledge and understanding he has built up in this area.
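The signal-speed contrast in the Kurzweil discussion above works out as simple arithmetic; both figures are taken from the review.

```python
# The review's comparison of neural and electronic signal speeds,
# worked through as plain arithmetic (figures from the text above).
neuron_speed_mps = 100            # electrochemical signals in the brain, m/s
circuit_speed_mps = 300_000_000   # electrical signals in a computer, m/s

speedup = circuit_speed_mps // neuron_speed_mps
# A three-million-fold difference, which is the "clear distinction"
# the review draws between biological and electronic substrates.
```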
Weiye Loh

Paris Review - The Grandmaster Hoax, Lincoln Michel - 0 views

  • The Turk was indeed a hoax, though Poe was wrong about the workings of the trick. Rather than a man hidden inside the wooden body, the cabinet’s seemingly exposed innards simply did not extend all the way back. A hidden grandmaster slid around as the cabinet doors were opened and closed, controlling The Turk’s movements and following the game’s action through a clever arrangement of magnets and strings.
Weiye Loh

Read the Web :: Carnegie Mellon University - 0 views

  •  
    Can computers learn to read? We think so. "Read the Web" is a research project that attempts to create a computer system that learns over time to read the web. Since January 2010, our computer system called NELL (Never-Ending Language Learner) has been running continuously, attempting to perform two tasks each day: First, it attempts to "read," or extract facts from text found in hundreds of millions of web pages (e.g., playsInstrument(George_Harrison, guitar)). Second, it attempts to improve its reading competence, so that tomorrow it can extract more facts from the web, more accurately. So far, NELL has accumulated over 15 million candidate beliefs by reading the web, and it is considering these at different levels of confidence. NELL has high confidence in 928,295 of these beliefs - these are displayed on this website. It is not perfect, but NELL is learning. You can track NELL's progress below or @cmunell on Twitter, browse and download its knowledge base, read more about our technical approach, or join the discussion group.
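The fact format quoted in the excerpt (e.g. playsInstrument(George_Harrison, guitar)) and the idea of candidate beliefs held at different confidence levels can be sketched roughly as follows. This is a toy illustration, not NELL's actual code: the Belief class, the 0.9 threshold, and the second example belief are all invented for the sketch.

```python
# A rough sketch (assumptions, not NELL's implementation) of candidate
# beliefs carrying confidence scores, with only high-confidence beliefs
# "promoted" for display.
from dataclasses import dataclass

@dataclass
class Belief:
    relation: str      # e.g. "playsInstrument"
    subject: str       # e.g. "George_Harrison"
    value: str         # e.g. "guitar"
    confidence: float  # 0.0-1.0, updated as reading evidence accumulates

def promoted(candidates, threshold=0.9):
    """Keep only the candidate beliefs the system is confident in."""
    return [b for b in candidates if b.confidence >= threshold]

candidates = [
    Belief("playsInstrument", "George_Harrison", "guitar", 0.97),
    Belief("playsInstrument", "George_Harrison", "tuba", 0.12),  # spurious extraction
]
high_confidence = promoted(candidates)
# Only the well-supported belief survives the confidence filter.
```

The excerpt's split between "over 15 million candidate beliefs" and the 928,295 displayed ones corresponds to this candidate-versus-promoted distinction.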
Weiye Loh

To Die of Having Lived: an article by Richard Rapport | The American Scholar - 0 views

  • Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
  • Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technological creations that permit vital functions to be sustained so well artificially, medical ethics were less obstructed by abstract definitions of death.
  • Generally agreed-upon criteria for brain death have simplified some of these confusions, but they have not solved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty.
    • Weiye Loh
       
      Isn't it always the case? That dichotomous relationships aren't clearly and equally demarcated, yet somehow we attempt to split them up... through polemical discourses and rhetoric...
  • ...13 more annotations...
  • Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
    • Weiye Loh
       
      In what situations do we hear the word "nearly" and in what situations do we hear the word "certain"? When we're dealing with a person's life, we hear "nearly", but when we're dealing with climate science we hear "certain"? 
  • Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution.
  • The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
  • Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
  • Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
    • Weiye Loh
       
      Perhaps then, the value of life is not so much life in itself per se, but rather the transactive amount it generates. 
  • While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit.
  • the problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
  • Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
  • best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) when healthy.
  • Not many people do this and, more regrettably, often the document is not included in the patient’s chart or it goes unnoticed.
  • Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
  • Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
  • Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?
  •  
    Spring 2010. To Die of Having Lived: a neurological surgeon reflects on what patients and their families should and should not do when the end draws near.
Weiye Loh

What humans know that Watson doesn't - CNN.com - 0 views

  • One of the most frustrating experiences produced by the winter from hell is dealing with the airlines' automated answer systems. Your flight has just been canceled and every second counts in getting an elusive seat. Yet you are stuck in an automated menu spelling out the name of your destination city.
  • Even more frustrating is knowing that you will never get to ask the question you really want to ask, as it isn't an option: "If I drive to Newark and board my flight to Tel Aviv there, will you cancel my whole trip, as I haven't started from my ticketed airport of origin, Ithaca?"
  • A human would immediately understand the question and give you an answer. That's why knowledgeable travelers rush to the nearest airport when they experience a cancellation, so they have a chance to talk to a human agent who can override the computer, rather than rebook by phone (more likely wait on hold and listen to messages about how wonderful a destination Tel Aviv is) or talk to a computer.
  • ...6 more annotations...
  • There is no doubt the IBM supercomputer Watson gave an impressive performance on "Jeopardy!" this week. But I was worried by the computer's biggest fluff Tuesday night. In answer to the question about naming a U.S. city whose first airport is named after a World War II hero and its second after a World War II battle, it gave Toronto, Ontario. Not even close!
  • Both the humans on the program knew the correct answer: Chicago. Even a famously geographically challenged person like me knew it.
  • Why did I know it? Because I have spent enough time stranded at O'Hare to have visited the monument to Butch O'Hare in the terminal. Watson, who has not, came up with the wrong answer. This reveals precisely what Watson lacks -- embodiment.
  • Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.
  • Providing unique answers to questions limited to around 25 words is not the same as dealing with real problems of an emotionally distraught passenger in an open system where there may not be a unique answer.
  • Watson beating the pants out of us on "Jeopardy!" is fun -- rather like seeing a tractor beat a human tug-of-war team. Machines have always been better than humans at some tasks.
Weiye Loh

Technology and Inequality - Kenneth Rogoff - Project Syndicate - 0 views

  • it is easy to forget that market forces, if allowed to play out, might eventually exert a stabilizing role. Simply put, the greater the premium for highly skilled workers, the greater the incentive to find ways to economize on employing their talents.
  • one of the main ways to uncover cheating is by using a computer program to detect whether a player’s moves consistently resemble the favored choices of various top computer programs.
  • many other examples of activities that were once thought exclusively the domain of intuitive humans, but that computers have come to dominate. Many teachers and schools now use computer programs to scan essays for plagiarism
  • ...4 more annotations...
  • computer-grading of essays is a surging science, with some studies showing that computer evaluations are fairer, more consistent, and more informative than those of an average teacher, if not necessarily of an outstanding one.
  • the relative prices of grains, metals, and many other basic goods tended to revert to a central mean tendency over sufficiently long periods. We conjectured that even though random discoveries, weather events, and technologies might dramatically shift relative values for certain periods, the resulting price differentials would create incentives for innovators to concentrate more attention on goods whose prices had risen dramatically.
  • people are not goods, but the same principles apply. As skilled labor becomes increasingly expensive relative to unskilled labor, firms and businesses have a greater incentive to find ways to “cheat” by using substitutes for high-price inputs. The shift might take many decades, but it also might come much faster as artificial intelligence fuels the next wave of innovation.
  • Many commentators seem to believe that the growing gap between rich and poor is an inevitable byproduct of increasing globalization and technology. In their view, governments will need to intervene radically in markets to restore social balance. I disagree. Yes, we need genuinely progressive tax systems, respect for workers’ rights, and generous aid policies on the part of rich countries. But the past is not necessarily prologue: given the remarkable flexibility of market forces, it would be foolish, if not dangerous, to infer rising inequality in relative incomes in the coming decades by extrapolating from recent trends.
  •  
    Until now, the relentless march of technology and globalization has played out hugely in favor of high-skilled labor, helping to fuel record-high levels of income and wealth inequality around the world. Will the endgame be renewed class warfare, with populist governments coming to power, stretching the limits of income redistribution, and asserting greater state control over economic life?
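The chess-cheating idea quoted above, flagging a player whose moves consistently resemble a top engine's favored choices, can be sketched in a few lines. This is a toy illustration, not any real anti-cheating tool: the move lists and the 90% cutoff are invented for the example.

```python
# A toy sketch of engine-match cheat detection: compute how often a
# player's moves coincide with a chess engine's top choice, and flag
# implausibly high agreement. Moves and cutoff are illustrative only.
def engine_match_rate(player_moves, engine_choices):
    """Fraction of positions where the player picked the engine's top move."""
    matches = sum(1 for p, e in zip(player_moves, engine_choices) if p == e)
    return matches / len(player_moves)

def looks_suspicious(player_moves, engine_choices, cutoff=0.9):
    return engine_match_rate(player_moves, engine_choices) >= cutoff

human  = ["e4", "Nf3", "Bc4", "d3", "h3"]    # a plausible human line
engine = ["e4", "Nf3", "Bb5", "d4", "O-O"]   # the engine's preferences
# 2/5 agreement is normal for a strong human; near-perfect agreement
# over many games is what investigators treat as a red flag.
suspicious = looks_suspicious(human, engine)
```

Real investigations aggregate this statistic over many games and compare it against the match rates of known honest players at the same rating.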
Weiye Loh

The Mechanic Muse - What Is Distant Reading? - NYTimes.com - 0 views

  • Lit Lab tackles literary problems by scientific means: hypothesis-testing, computational modeling, quantitative analysis. Similar efforts are currently proliferating under the broad rubric of “digital humanities,” but Moretti’s approach is among the more radical. He advocates what he terms “distant reading”: understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.
  • People recognize, say, Gothic literature based on castles, revenants, brooding atmospheres, and the greater frequency of words like “tremble” and “ruin.” Computers recognize Gothic literature based on the greater frequency of words like . . . “the.” Now, that’s interesting. It suggests that genres “possess distinctive features at every possible scale of analysis.” More important for the Lit Lab, it suggests that there are formal aspects of literature that people, unaided, cannot detect.
  • Distant reading might prove to be a powerful tool for studying literature, and I’m intrigued by some of the lab’s other projects, from analyzing the evolution of chapter breaks to quantifying the difference between Irish and English prose styles. But whatever’s happening in this paper is neither powerful nor distant. (The plot networks were assembled by hand; try doing that without reading Hamlet.) By the end, even Moretti concedes that things didn’t unfold as planned. Somewhere along the line, he writes, he “drifted from quantification to the qualitative analysis of plot.”
  • ...5 more annotations...
  • most scholars, whatever their disciplinary background, do not publish negative results.
  • I would admire it more if he didn’t elsewhere dismiss qualitative literary analysis as “a theological exercise.” (Moretti does not subscribe to literary-analytic pluralism: he has suggested that distant reading should supplant, not supplement, close reading.) The counterpoint to theology is science, and reading Moretti, it’s impossible not to notice him jockeying for scientific status. He appears now as literature’s Linnaeus (taxonomizing a vast new trove of data), now as Vesalius (exposing its essential skeleton), now as Galileo (revealing and reordering the universe of books), now as Darwin (seeking “a law of literary evolution”).
  • Literature is an artificial universe, and the written word, unlike the natural world, can’t be counted on to obey a set of laws. Indeed, Moretti often mistakes metaphor for fact. Those “skeletons” he perceives inside stories are as imposed as exposed; and literary evolution, unlike the biological kind, is largely an analogy. (As the author and critic Elif Batuman pointed out in an n+1 essay on Moretti’s earlier work, books actually are the result of intelligent design.)
  • Literature, he argues, is “a collective system that should be grasped as such.” But this, too, is a theology of sorts — if not the claim that literature is a system, at least the conviction that we can find meaning only in its totality.
  • The idea that truth can best be revealed through quantitative models dates back to the development of statistics (and boasts a less-than-benign legacy). And the idea that data is gold waiting to be mined; that all entities (including people) are best understood as nodes in a network; that things are at their clearest when they are least particular, most interchangeable, most aggregated — well, perhaps that is not the theology of the average lit department (yet). But it is surely the theology of the 21st century.
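The word-frequency signal the excerpts describe, that genres differ measurably even in how often a word like "the" appears, can be illustrated with a toy computation. Both text snippets here are invented placeholders, not real corpora, and real distant-reading work operates over thousands of texts rather than single sentences.

```python
# A toy illustration of the distant-reading signal: even the most common
# function words can occur at different rates across genres, a scale of
# analysis the excerpt says unaided readers cannot detect.
def relative_frequency(text, word):
    """Occurrences of `word` per token in `text` (naive whitespace tokens)."""
    tokens = text.lower().split()
    return tokens.count(word) / len(tokens)

# Invented stand-ins for a Gothic passage and a non-Gothic one.
gothic = "the castle loomed and the revenant crossed the ruined hall"
other = "she walked to town and bought bread for supper"

rate_gothic = relative_frequency(gothic, "the")
rate_other = relative_frequency(other, "the")
# Same word, different rates: the kind of formal feature a classifier
# can exploit across a large corpus.
```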
Weiye Loh

Google is funding a new software project that will automate writing local news - Recode - 0 views

  •  
    "Radar aims to automate local reporting with large public databases from government agencies or local law enforcement - basically roboticizing the work of reporters. Stories from the data will be penned using Natural Language Generation, which converts information gleaned from the data into words. The robotic reporters won't be working alone. The grant includes funds allocated to hire five journalists to identify datasets, as well as curate and edit the news articles generated from Radar. The project also aims to create automated ways to add images and video to robot-made stories."