
New Media Ethics 2009 course: items tagged "Literary Criticism"


Weiye Loh

Hamlet and the region of death - The Boston Globe

  • To many readers — and to some of Moretti’s fellow academics — the very notion of quantitative literary studies can seem like an offense to that which made literature worth studying in the first place: its meaning and beauty. For Moretti, however, moving literary scholarship beyond reading is the key to producing new knowledge about old texts — even ones we’ve been studying for centuries.
  • Franco Moretti, however, often doesn't read the books he studies. Instead, he analyzes them as data. Working with a small group of graduate students, the Stanford University English professor has fed thousands of digitized texts into databases and then mined the accumulated information for new answers to new questions. How far, on average, do characters in 19th-century English novels walk over the course of a book? How frequently are new genres of popular fiction invented? How many words does the average novel's protagonist speak? By posing these and other questions, Moretti has become the unofficial leader of a new, more quantitative kind of literary study.
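The corpus questions quoted above come down to counting over digitized text. A minimal sketch, assuming a toy two-novel corpus and a crude quoted-speech heuristic for dialogue (the texts, titles, and heuristics are invented for illustration; this is not Moretti's actual data or pipeline):

```python
import re

# Toy stand-in for a corpus of digitized novels (invented examples).
corpus = {
    "Novel A": "He walked to the castle. 'Who goes there?' he asked.",
    "Novel B": "She read all day. The rain fell. 'Come in,' she said. 'Sit.'",
}

def word_count(text):
    """Count word tokens with a simple letters-only tokenizer."""
    return len(re.findall(r"[A-Za-z]+", text))

def quoted_word_count(text):
    """Count words inside single-quoted spans: a crude proxy for dialogue."""
    return sum(word_count(span) for span in re.findall(r"'([^']*)'", text))

# Aggregate per-text numbers, as a database query over the corpus would.
for title, text in corpus.items():
    print(f"{title}: {word_count(text)} words, "
          f"{quoted_word_count(text)} spoken in dialogue")
```

Real distant-reading work would replace the dictionary with thousands of files and the quote heuristic with proper dialogue extraction; the point is only that such questions become aggregate queries once texts are treated as data.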

The Mechanic Muse - What Is Distant Reading? - NYTimes.com

  • Lit Lab tackles literary problems by scientific means: hypothesis-testing, computational modeling, quantitative analysis. Similar efforts are currently proliferating under the broad rubric of “digital humanities,” but Moretti’s approach is among the more radical. He advocates what he terms “distant reading”: understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.
  • People recognize, say, Gothic literature based on castles, revenants, brooding atmospheres, and the greater frequency of words like “tremble” and “ruin.” Computers recognize Gothic literature based on the greater frequency of words like . . . “the.” Now, that’s interesting. It suggests that genres “possess distinctive features at every possible scale of analysis.” More important for the Lit Lab, it suggests that there are formal aspects of literature that people, unaided, cannot detect.
  • Distant reading might prove to be a powerful tool for studying literature, and I’m intrigued by some of the lab’s other projects, from analyzing the evolution of chapter breaks to quantifying the difference between Irish and English prose styles. But whatever’s happening in this paper is neither powerful nor distant. (The plot networks were assembled by hand; try doing that without reading Hamlet.) By the end, even Moretti concedes that things didn’t unfold as planned. Somewhere along the line, he writes, he “drifted from quantification to the qualitative analysis of plot.”
  • Most scholars, whatever their disciplinary background, do not publish negative results.
  • I would admire it more if he didn’t elsewhere dismiss qualitative literary analysis as “a theological exercise.” (Moretti does not subscribe to literary-analytic pluralism: he has suggested that distant reading should supplant, not supplement, close reading.) The counterpoint to theology is science, and reading Moretti, it’s impossible not to notice him jockeying for scientific status. He appears now as literature’s Linnaeus (taxonomizing a vast new trove of data), now as Vesalius (exposing its essential skeleton), now as Galileo (revealing and reordering the universe of books), now as Darwin (seeking “a law of literary ­evolution”).
  • Literature is an artificial universe, and the written word, unlike the natural world, can’t be counted on to obey a set of laws. Indeed, Moretti often mistakes metaphor for fact. Those “skeletons” he perceives inside stories are as imposed as exposed; and literary evolution, unlike the biological kind, is largely an analogy. (As the author and critic Elif Batuman pointed out in an n+1 essay on Moretti’s earlier work, books actually are the result of intelligent design.)
  • Literature, he argues, is “a collective system that should be grasped as such.” But this, too, is a theology of sorts — if not the claim that literature is a system, at least the conviction that we can find meaning only in its totality.
  • The idea that truth can best be revealed through quantitative models dates back to the development of statistics (and boasts a less-than-benign legacy). And the idea that data is gold waiting to be mined; that all entities (including people) are best understood as nodes in a network; that things are at their clearest when they are least particular, most interchangeable, most aggregated — well, perhaps that is not the theology of the average lit department (yet). But it is surely the theology of the 21st century.
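The function-word observation in the annotations above (computers separating genres by the frequency of words like "the") is easy to demonstrate. A minimal sketch with two invented passages, not the Lit Lab's data:

```python
import re

def relative_freq(text, word):
    """Share of word tokens equal to `word`, case-insensitive."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return tokens.count(word) / len(tokens)

# Invented passages for illustration.
gothic = ("The wind howled through the ruin. The shadow crossed "
          "the hall, and the candle guttered in the draught.")
modern = ("She checks her phone, orders coffee, answers email, "
          "then walks home past shops and crowds.")

# The definite article alone already separates the two styles.
print(relative_freq(gothic, "the"))  # noticeably higher
print(relative_freq(modern, "the"))
```

A real classifier would use the joint profile of many function words over many texts, but even this single feature shows how a formally trivial signal can carry stylistic information readers never consciously track.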

Rationally Speaking: Studying folk morality: philosophy, psychology, or what?

  • In the magazine article Joshua mentions several studies of “folk morality,” i.e. of how ordinary people think about moral problems. The results are fascinating. It turns out that people’s views are correlated with personality traits, with subjects who score high on “openness to experience” being reliably more relativist than objectivist about morality (I am not using the latter term in the infamous Randian meaning here, but as Knobe does, to indicate the idea that morality has objective bases).
  • Other studies show that people who are capable of considering multiple options in solving mathematical puzzles also tend to be moral relativists, and — in a study co-authored by Knobe himself — the very same situation (infanticide) was judged along a sliding scale from objectivism to relativism depending on whether the hypothetical scenario involved a fellow American (presumably sharing our same general moral values), the member of an imaginary Amazonian tribe (for which infanticide was acceptable), or an alien from the planet Pentar (belonging to a race whose only goal in life is to turn everything into equilateral pentagons, and for whom killing individuals that might get in the way of that lofty objective is a duty). Oh, and related research also shows that young children tend to be objectivists, while young adults are usually relativists — but that later in life one’s primordial objectivism apparently experiences a comeback.
  • This is all very interesting social science, but is it philosophy? Granted, the differences between various disciplines are often not clear cut, and of course whenever people engage in truly inter-disciplinary work we should simply applaud the effort and encourage further work. But I do wonder in what sense, if any, the kinds of results that Joshua and his colleagues find have much to do with moral philosophy.
  • there seems to me the potential danger of confusing various categories of moral discourse. For instance, are the “folks” studied in these cases actually relativist, or perhaps adherents to one of several versions of moral anti-realism? The two are definitely not the same, but I doubt that the subjects in question could tell the difference (and I wouldn’t expect them to, after all they are not philosophers).
  • why do we expect philosophers to learn from “folk morality” when we do not expect, say, physicists to learn from folk physics (which tends to be Aristotelian in nature), or statisticians from people’s understanding of probability theory (which is generally remarkably poor, as casino owners know very well)? Or even, while I’m at it, why not ask literary critics to discuss Shakespeare in light of what common folks think about the bard (making sure, perhaps, that they have at least read his works, and not just watched the movies)?
  • Hence, my other examples of statistics (i.e., math) and literary criticism. I conceive of philosophy in general, and moral philosophy in particular, as more akin to a (science-informed, to be sure) mix between logic and criticism. Some moral philosophy consists in engaging an “if ... then” sort of scenario, akin to logical-mathematical thinking, where one begins with certain axioms and attempts to derive the consequences of such axioms. In other respects, moral philosophers exercise reflective criticism concerning those consequences as they might be relevant to practical problems.
  • For instance, we may write philosophically about abortion, and begin our discussion from a comparison of different conceptions of “person.” We might conclude that “if” one adopts conception X of what a person is, “then” abortion is justifiable under such and such conditions; while “if” one adopts conception Y of a person, “then” abortion is justifiable under a different set of conditions, or not justifiable at all. We could, of course, back up even further and engage in a discussion of what “personhood” is, thus moving from moral philosophy to metaphysics.
  • Nowhere in the above are we going to ask “folks” what they think a person is, or how they think their implicit conception of personhood informs their views on abortion. Of course people’s actual views on abortion are crucial — especially for public policy — and they are intrinsically interesting to social scientists. But they don’t seem to me to make much more contact with philosophy than the above mentioned popular opinions on Shakespeare make contact with serious literary criticism. And please, let’s not play the cheap card of “elitism,” unless we are willing to apply the label to just about any intellectual endeavor, in any discipline.
  • There is one area in which experimental philosophy can potentially contribute to philosophy proper (as opposed to social science). Once we have a more empirically grounded understanding of what people’s moral reasoning actually is, then we can analyze the likely consequences of that reasoning for a variety of societal issues. But now we would be doing something more akin to political than moral philosophy.
  • My colleague Joshua Knobe at Yale University recently published an intriguing article in The Philosopher's Magazine about the experimental philosophy of moral decision making. Joshua and I had a nice chat during a recent Rationally Speaking podcast dedicated to experimental philosophy, but I'm still not convinced about the whole enterprise.

LRB · Jim Holt · Smarter, Happier, More Productive

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • Empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’

Why Do Intellectuals Oppose Capitalism?

  • Not all intellectuals are on the "left."
  • But in their case, the curve is shifted and skewed to the political left.
  • By intellectuals, I do not mean all people of intelligence or of a certain level of education, but those who, in their vocation, deal with ideas as expressed in words, shaping the word flow others receive. These wordsmiths include poets, novelists, literary critics, newspaper and magazine journalists, and many professors. It does not include those who primarily produce and transmit quantitatively or mathematically formulated information (the numbersmiths) or those working in visual media, painters, sculptors, cameramen. Unlike the wordsmiths, people in these occupations do not disproportionately oppose capitalism. The wordsmiths are concentrated in certain occupational sites: academia, the media, government bureaucracy.
  • Wordsmith intellectuals fare well in capitalist society; there they have great freedom to formulate, encounter, and propagate new ideas, to read and discuss them. Their occupational skills are in demand, their income much above average. Why then do they disproportionately oppose capitalism? Indeed, some data suggest that the more prosperous and successful the intellectual, the more likely he is to oppose capitalism. This opposition to capitalism is mainly "from the left" but not solely so. Yeats, Eliot, and Pound opposed market society from the right.
  • We can distinguish two types of explanation for the relatively high proportion of intellectuals in opposition to capitalism. One type finds a factor unique to the anti-capitalist intellectuals. The second type of explanation identifies a factor applying to all intellectuals, a force propelling them toward anti-capitalist views. Whether it pushes any particular intellectual over into anti-capitalism will depend upon the other forces acting upon him. In the aggregate, though, since it makes anti-capitalism more likely for each intellectual, such a factor will produce a larger proportion of anti-capitalist intellectuals. Our explanation will be of this second type. We will identify a factor which tilts intellectuals toward anti-capitalist attitudes but does not guarantee it in any particular case.
  • Intellectuals now expect to be the most highly valued people in a society, those with the most prestige and power, those with the greatest rewards. Intellectuals feel entitled to this. But, by and large, a capitalist society does not honor its intellectuals. Ludwig von Mises explains the special resentment of intellectuals, in contrast to workers, by saying they mix socially with successful capitalists and so have them as a salient comparison group and are humiliated by their lesser status.
  • Why then do contemporary intellectuals feel entitled to the highest rewards their society has to offer and resentful when they do not receive this? Intellectuals feel they are the most valuable people, the ones with the highest merit, and that society should reward people in accordance with their value and merit. But a capitalist society does not satisfy the principle of distribution "to each according to his merit or value." Apart from the gifts, inheritances, and gambling winnings that occur in a free society, the market distributes to those who satisfy the perceived market-expressed demands of others, and how much it so distributes depends on how much is demanded and how great the alternative supply is. Unsuccessful businessmen and workers do not have the same animus against the capitalist system as do the wordsmith intellectuals. Only the sense of unrecognized superiority, of entitlement betrayed, produces that animus.
  • What factor produced feelings of superior value on the part of intellectuals? I want to focus on one institution in particular: schools. As book knowledge became increasingly important, schooling--the education together in classes of young people in reading and book knowledge--spread. Schools became the major institution outside of the family to shape the attitudes of young people, and almost all those who later became intellectuals went through schools. There they were successful. They were judged against others and deemed superior. They were praised and rewarded, the teacher's favorites. How could they fail to see themselves as superior? Daily, they experienced differences in facility with ideas, in quick-wittedness. The schools told them, and showed them, they were better.
  • We have refined the hypothesis somewhat. It is not simply formal schools but formal schooling in a specified social context that produces anti-capitalist animus in (wordsmith) intellectuals. No doubt, the hypothesis requires further refining. But enough. It is time to turn the hypothesis over to the social scientists, to take it from armchair speculations in the study and give it to those who will immerse themselves in more particular facts and data. We can point, however, to some areas where our hypothesis might yield testable consequences and predictions. First, one might predict that the more meritocratic a country's school system, the more likely its intellectuals are to be on the left. (Consider France.) Second, those intellectuals who were "late bloomers" in school would not have developed the same sense of entitlement to the very highest rewards; therefore, a lower percentage of the late-bloomer intellectuals will be anti-capitalist than of the early bloomers. Third, we limited our hypothesis to those societies (unlike Indian caste society) where the successful student plausibly could expect further comparable success in the wider society. In Western society, women have not heretofore plausibly held such expectations, so we would not expect the female students who constituted part of the academic upper class yet later underwent downward mobility to show the same anti-capitalist animus as male intellectuals. We might predict, then, that the more a society is known to move toward equality in occupational opportunity between women and men, the more its female intellectuals will exhibit the same disproportionate anti-capitalism its male intellectuals show.