
Home/ TOK Friends/ Group items tagged nativism


Javier E

Geology's Timekeepers Are Feuding - The Atlantic - 0 views

  • In 2000, the Nobel Prize-winning chemist Paul Crutzen won permanent fame for stratigraphy. He proposed that humans had so thoroughly altered the fundamental processes of the planet—through agriculture, climate change, nuclear testing, and other phenomena—that a new geological epoch had commenced: the Anthropocene, the age of humans.
  • Zalasiewicz should know. He is the chair of the Anthropocene working group, which the ICS established in 2009 to investigate whether the new epoch deserved a place in stratigraphic time.
  • In 2015, the group announced that the Anthropocene was a plausible new layer and that it should likely follow the Holocene. But the team has yet to propose a “golden spike” for the epoch: a boundary in the sedimentary rock record where the Anthropocene clearly begins.
  • Officially, the Holocene is still running today. You have lived your entire life in the Holocene, and the Holocene has constituted the geological “present” for as long as there have been geologists. But if we now live in a new epoch, the Anthropocene, then the ICS will have to chop the Holocene somewhere. It will have to choose when the Holocene ended, and it will move some amount of time out of the purview of the Holocene working group and into that of the Anthropocene working group.
  • This is politically difficult. And right now, the Anthropocene working group seems intent on not carving too deep into the Holocene. In a paper published earlier this year in Earth-Science Reviews, the Anthropocene working group’s members strongly imply that they will propose starting the new epoch in the mid-20th century.
  • Some geologists argue that the Anthropocene started even earlier: perhaps 4,000 or 6,000 years ago, as farmers began to remake the land surface. “Most of the world’s forests that were going to be converted to cropland and agriculture were already cleared well before 1950,” says Bill Ruddiman, a geology professor at the University of Virginia and an advocate of this extremely early Anthropocene.
  • “Most of the world’s prairies and steppes that were going to be cleared for crops were already gone by then. How can you argue the Anthropocene started in 1950 when all of the major things that affect Earth’s surface were already over?” Van der Pluijm agreed that the Anthropocene working group was picking 1950 for “not very good reasons.” “Agriculture was the revolution that allowed society to develop,” he said. “That was really when people started to force the land to work for them. That massive land movement—it’s like a landslide, except it’s a humanslide. And it is not, of course, as dramatic as today’s motion of land, but it starts the clock.”
  • This muddle had to stop. The Holocene comes up constantly in discussions of modern global warming. Geologists and climate scientists did not make their jobs any easier by slicing it in different ways and telling contradictory stories about it.
  • This process started almost 10 years ago. For this reason, Zalasiewicz, the chair of the Anthropocene working group, said he wasn’t blindsided by the new subdivisions at all. In fact, he voted to adopt them as a member of the Quaternary working group. “Whether the Anthropocene works with a unified Holocene or one that’s in three parts makes for very little difference,” he told me. In fact, it had made the Anthropocene group’s work easier. “It has been useful to compare the scale of the two climate events that mark the new boundaries [within the Holocene] with the kind of changes that we’re assessing in the Anthropocene. It has been quite useful to have the compare and contrast,” he said. “Our view is that some of the changes in the Anthropocene are rather bigger.”
  • Zalasiewicz said that he and his colleagues were going as fast as they could. When the working group began its work in 2009, it was “really starting from scratch,” he told me. While other working groups have a large body of stratigraphic research to consider, the Anthropocene working group had nothing. “We had to spend a fair bit of time deciding whether the Anthropocene was geology at all,” he said. Then they had to decide where its signal could show up. Now, they’re looking for evidence that shows it.
  • This cycle of “glacials” and “interglacials” has played out about 50 times over the last several million years. When the Holocene began, it was only another interglacial—albeit the one we live in. Until recently, glaciers were still on schedule to descend in another 30,000 years or so. Yet geologists still call the Holocene an epoch, even though they do not bestow this term on any of the previous 49 interglacials. It gets special treatment because we live in it.
  • Much of this science is now moot. Humanity’s vast emissions of greenhouse gas have now so warmed the climate that they have offset the next glaciation. They may even knock us out of the ongoing cycle of Ice Ages, sending the Earth hurtling back toward a “greenhouse” climate after the more amenable “icehouse” climate during which humans evolved. For this reason, van der Pluijm wants the Anthropocene to supplant the Holocene entirely. Humans made their first great change to the environment at the close of the last glaciation, when they seem to have hunted the world’s largest mammals—the woolly mammoth, the saber-toothed tiger—to extinction. Why not start the Anthropocene then? He would even rename the pre-1800 period “the Holocene Age” as a consolation prize:
  • Zalasiewicz said he would not start the Anthropocene too early in time, as it would be too work-intensive for the field to rename such a vast swath of time. “The early-Anthropocene idea would crosscut against the Holocene as it’s seen by Holocene workers,” he said. If other academics didn’t like this, they could create their own timescales and start the Anthropocene Epoch where they choose. “We have no jurisdiction over the word Anthropocene,” he said.
  • Ruddiman, the University of Virginia professor who first argued for a very early Anthropocene, now makes an even broader case. He’s not sure it makes sense to formally define the Anthropocene at all. In a paper published this week, he objects to designating the Anthropocene as starting in the 1950s—and then he objects to delineating the Anthropocene, or indeed any new geological epoch, by name. “Keep the use of the term informal,” he told me. “Don’t make it rigid. Keep it informal so people can say the early-agricultural Anthropocene, or the industrial-era Anthropocene.”
  • “This is the age of geochemical dating,” he said. Geologists have stopped looking to the ICS to place each rock sample into the rock sequence. Instead, field geologists use laboratory techniques to get a precise year or century of origin for each rock sample. “The community just doesn’t care about these definitions,” he said.
Javier E

I Actually Read Woody Allen's Memoir - The Atlantic - 0 views

  • I’m a Woody Allen person, not because I disbelieve Dylan—in fact, I believe her. I’m a Woody Allen person because his movies helped shape me, and I can’t unsee them, the way I can’t un-read The Great Gatsby or un-hear “Gimme Shelter.” These are things that informed my sensibilities. All of them are part of me.
  • As to our opinion about his past, one thing is for sure: He couldn’t care less about it. “Rather than live on in the hearts and minds of the public,” he says in the final lines of the book, “I prefer to live on in my apartment.” Exit laughing.
  • the scene in Hannah and Her Sisters in which the Woody Allen character, distraught by his realization that there is no God and considering suicide, stumbles into a revival house to find the movie playing. He says in voice-over: The movie was a film that I’d seen many times in my life since I was a kid, and I always loved it. And I’m watching these people up on the screen and I started getting hooked on the film. And I started to feel, How can you even think of killing yourself? I mean, isn’t it so stupid? I mean, look at all the people up there on the screen. They’re real funny—and what if the worst is true? What if there’s no God and you only go around once and that’s it? Well, you know, don’t you want to be part of the experience? I did. I do.
  • Can we still enjoy his work?  Of course we can, because the movies don’t really belong to Woody Allen any more than they do to you and me.
  • Some wrongs are so great that no legal or bureaucratic process can ever make things right. At some point, the only way to get unchained from a monster is through forgiveness. It would seem impossible for Geimer to be able to forgive Polanski, but she did—who understands how grace operates?—and has apparently been at peace ever since.
  • Woody Allen taught us that New York is the center of the world and L.A. is outer space. He installed himself in Elaine’s for a thousand dinners in the company of glittering figures of yesteryear (Norman Mailer! Liza Minnelli! Bill Bradley!) and made movie after movie and became one of the famous people the rest of the country associates most closely with the city.
  • He’s a 42-year-old guy with a 12th grade girlfriend, and if you want to understand the ’70s, maybe I have to tell you only that Vincent Canby’s review in The New York Times made no particular note of this fact other than to describe Tracy—played by Mariel Hemingway—as “a beautiful, 17-year-old nymphet with a turned-down mouth.” Or I could tell you that Manhattan was nominated for two Academy Awards and was widely loved by the in-crowd.
  • Soon enough he was writing, directing, and then acting in his own movies. He developed a particular method of moviemaking that was in some regards reminiscent of the studio system: he worked fast, almost never rehearsed, and rarely shot multiple takes or engineered complex shots: “As long as you’re dealing with comedy, particularly broad comedy, all you want is the scene should be lit, loud and fast.”
  • Allen is one of the great storytellers of his time, completely original, and any version of his life—including this one, in which we are obviously in the hands of an unreliable narrator, although no more so than in any of his autobiographical movies—can only be riveting. In a matter of phrases, he accomplishes things it takes a lesser writer several chapters to establish
  • We have the damn thing. What are we going to do with it? I suppose we could start—why not?—by actually reading it. And within just a phrase or two, you realize why people were afraid of it: Allen is a matchless comic writer and one whose voice is so well known by his aging fans that it’s as though the book is pouring into you through a special receiver dedicated just to him. Woody Allen does a great Woody Allen.
Javier E

Psychological nativism - Wikipedia - 0 views

  • In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs.
  • Some nativists believe that specific beliefs or preferences are "hard-wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.
  • Nativism has a history in philosophy, particularly as a reaction to the straightforward empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.
  • The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.
  • Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language.
  • For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language
  • In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).
  • A number of other theorists[1][2][3] have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.[4]
  • Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success.[20] Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis).[14][21][22][23]
  • The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.
  • This evidence is all the more impressive when one considers that most children do not receive reliable corrections for grammatical errors.[9] Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly.[10] The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.
  • Researchers at Blue Brain discovered a network of about fifty neurons which they believed were building blocks of more complex knowledge but contained basic innate knowledge that could be combined in different, more complex ways to give rise to acquired knowledge, like memory.[11]
  • experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established prior to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.[11]
  • modern nativist theory makes little in the way of specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". As the influential psychologist Henry L. Roediger III remarked, "Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".[13]
  • , Chomsky's poverty of the stimulus argument is controversial within linguistics.[14][15][16][17][18][19]
  • Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.[6]
  • Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and the intended outcome is how an organism is meant to develop.[24]
johnsonel7

What's Lost When a Language Disappears | The New Republic - 0 views

  • The cultural practices and locales that define the hundreds of Native communities dotting the North American landscape are grounded in languages. Each is unique, with distinct dialects, accents, and slang. There are words, phrases, and concepts that do not exist in the American English lexicon, that confounding colonizer speech that Native Americans were forced to adopt and master. And nearly all of them are in danger of going extinct. In 1998, there were 175 Indigenous languages still in use within the United States. Today, there are 115. With each passing year, as elders are laid to rest and new babies are born, Native people lose their tongue.
  • Learning a Native language is not only about knowledge or authenticity; it extends a symbol of a thriving and unique culture to the rising generation. It’s the cadence of survival. And if it goes silent, a great tradition is broken.
  • The latest version of the bill, coming at the tail end of what the United Nations has dubbed the Year of Indigenous Language, will seek to lower the bill’s previous class-size restrictions, which were preventing tribes from obtaining federal grants to establish their own language programs because many smaller tribes had lower enrollment numbers than what the grant applications required.
  • The general experience of losing one’s language to American preference is not unique to being Indigenous. It’s an American philosophy, one that is echoed in the experience of the children of immigrants whose parents do not teach them their language, in an attempt to shield them from racism. The president enforces a regime of assimilation when he declares, “This is a country where we speak English. It’s English. You have to speak English!”
  • Many of these languages are not even a full lifetime away from disappearing. They exist for as long as the heart of the elder who carries the words continues to beat. One day, that heart will stop, and so too will the language.
oliviaodon

How Do We Learn Languages? | Brain Blogger - 0 views

  • The use of sound is one of the most common methods of communication both in the animal kingdom and between humans.
  • human speech is a very complex process and therefore needs intensive postnatal learning to be used effectively. Furthermore, to be effective, the learning phase should happen very early in life, and it assumes normally functioning hearing and brain systems.
  • Nowadays, scientists and doctors are discovering the important brain zones involved in the processing of language information. Those zones are grouped into a number of language networks, including Broca’s area, Wernicke’s area, the middle temporal, the inferior parietal, and the angular gyrus. The variety of such brain zones clearly shows that language processing is a very complex task. On the functional level, decoding a language begins in the ear, where the incoming sounds are summed in the auditory nerve as an electrical signal and delivered to the auditory cortex, where neurons extract auditory objects from that signal.
  • The effectiveness of this process is so great that the human brain is able to accurately identify words and whole phrases from a noisy background. This power of analysis brings to mind the great similarity between the brain and powerful supercomputers.
  • Until the last decade few studies compared the language acquisition in adults and children. Thanks to modern imaging and electroencephalography we are now able to address this question.
  • infants begin their lives with a very flexible brain that allows them to acquire virtually any language they are exposed to. Moreover, they can learn a language’s words almost equally well by listening or by visual coding. This brain plasticity is the motor drive of children’s capability of “cracking the speech code” of a language. With time, this ability is dramatically decreased and adults find it harder to acquire a new language.
  • clearly demonstrated that there are anatomical brain differences between fast and slow learners of foreign languages. By analyzing a group of people having a homogenous language background, scientists found that differences in specific brain regions can predict the capacity of a person to learn a second language.
  • Functional imaging of the brain revealed that activated brain parts are different between native and non-native speakers. The superior temporal gyrus is an important brain region involved in language learning. For a native speaker this part is responsible for automated processing of lexical retrieval and the build of phrase structure. In native speakers this zone is much more activated than in non-native ones.
  • Language acquisition is a long-term process by which information is stored in the brain unconsciously, making it appropriate for oral and written usage. In contrast, language learning is a conscious process of knowledge acquisition that needs supervision and control by the person.
  •  
    Another cool article about how the brain works and language (inductive reasoning). 
cvanderloo

Kennewick Man will be reburied, but quandaries around human remains won't - 0 views

  • Following bitter disputes, five Native American groups in the Pacific Northwest have come together to facilitate the reburial of an individual they know as “Ancient One.”
  • For them, data gathering was simply not a priority. Instead, they sought to return their ancestors to the earth.
  • The Native American Graves Protection and Repatriation Act (NAGPRA) aimed to address the problematic history behind museum human remains collections.
  • Since NAGPRA passed in 1990, the National Park Service estimates over 50,000 sets of human remains have been repatriated in the United States.
  • Museums in the U.S. and Europe have collected and studied human remains for well over a century, with the practice gaining considerable momentum after the Civil War.
  • The skeletons provided better data about diseases and migration, as well as information about historic diet, with potential impact for living populations.
  • Some anthropologists were eager to scientifically test the bones hoping for clues about who the first Americans were and where they came from. But many Native Americans hesitated to support this scientific scrutiny (including tests which permanently destroy or damage the original bone), arguing it was disrespectful to their ancient ancestor. They wanted him laid to rest.
  • Presenting human remains as purely scientific specimens and historical curiosities hurt living descendants by treating entire populations as scientific resources rather than human beings. And by focusing mainly on nonwhite groups, the practice reinforced in subtle and direct ways the scientific racism permeating the era.
  • Hidden away from public view, the prehistoric remains were anything but forgotten. Many indigenous people came to view Kennewick Man as a symbol for the failings of the new NAGPRA law.
  • Last year, genetic testing finally proved something many people had suggested for some time: Kennewick Man is more closely related to Native Americans than any other living human group.
  • By some estimates, museums today house more than half a million individual Native American remains. Probably hundreds if not thousands of sets of skeletal remains will face these big questions in the coming decades.
  • Indicative of changing attitudes and ethical approaches to museum exhibition, recent calls to display Kennewick Man’s remains have largely been rebuked, despite potential for engaging large audiences.
  • Kennewick Man may be among the most high-profile cases of human remains going under the microscope – both in terms of the scientific study he was subject to and the intensity of the debate surrounding him – but he is certainly far from alone.
Javier E

WHICH IS THE BEST LANGUAGE TO LEARN? | More Intelligent Life - 2 views

  • For language lovers, the facts are grim: Anglophones simply aren’t learning them any more. In Britain, despite four decades in the European Union, the number of A-levels taken in French and German has fallen by half in the past 20 years, while what was a growing trend of Spanish-learning has stalled. In America, the numbers are equally sorry.
  • compelling reasons remain for learning other languages.
  • First of all, learning any foreign language helps you understand all language better—many Anglophones first encounter the words “past participle” not in an English class, but in French. Second, there is the cultural broadening. Literature is always best read in the original. Poetry and lyrics suffer particularly badly in translation. And learning another tongue helps the student grasp another way of thinking.
  • is Chinese the language of the future?
  • So which one should you, or your children, learn? If you take a glance at advertisements in New York or A-level options in Britain, an answer seems to leap out: Mandarin.
  • The practical reasons are just as compelling. In business, if the team on the other side of the table knows your language but you don’t know theirs, they almost certainly know more about you and your company than you do about them and theirs—a bad position to negotiate from.
  • This factor is the Chinese writing system (which Japan borrowed and adapted centuries ago). The learner needs to know at least 3,000-4,000 characters to make sense of written Chinese, and thousands more to have a real feel for it. Chinese, with all its tones, is hard enough to speak. But  the mammoth feat of memory required to be literate in Mandarin is harder still. It deters most foreigners from ever mastering the system—and increasingly trips up Chinese natives.
  • If you were to learn ten languages ranked by general usefulness, Japanese would probably not make the list. And the key reason for Japanese’s limited spread will also put the brakes on Chinese.
  • A recent survey reported in the People’s Daily found 84% of respondents agreeing that skill in Chinese is declining.
  • Fewer and fewer native speakers learn to produce characters in traditional calligraphy. Instead, they write their language the same way we do—with a computer. And not only that, but they use the Roman alphabet to produce Chinese characters: type in wo and Chinese language-support software will offer a menu of characters pronounced wo; the user selects the one desired. (Or if the user types in wo shi zhongguo ren, “I am Chinese”, the software detects the meaning and picks the right characters.) With less and less need to recall the characters cold, the Chinese are forgetting them
  • As long as China keeps the character-based system—which will probably be a long time, thanks to cultural attachment and practical concerns alike—Chinese is very unlikely to become a true world language, an auxiliary language like English, the language a Brazilian chemist will publish papers in, hoping that they will be read in Finland and Canada. By all means, if China is your main interest, for business or pleasure, learn Chinese. It is fascinating, and learnable—though Moser’s online essay, “Why Chinese is so damn hard,” might discourage the faint of heart and the short of time.
  • But if I was asked what foreign language is the most useful, and given no more parameters (where? for what purpose?), my answer would be French. Whatever you think of France, the language is much less limited than many people realise.
  • French ranks only 16th on the list of languages ranked by native speakers. But ranked above it are languages like Telugu and Javanese that no one would call world languages. Hindi does not even unite India. Also in the top 15 are Arabic, Spanish and Portuguese, major languages to be sure, but regionally concentrated. If your interest is the Middle East or Islam, by all means learn Arabic. If your interest is Latin America, Spanish or Portuguese is the way to go. Or both; learning one makes the second quite easy.
  • if you want another truly global language, there are surprisingly few candidates, and for me French is unquestionably top of the list. It can enhance your enjoyment of art, history, literature and food, while giving you an important tool in business and a useful one in diplomacy. It has native speakers in every region on earth. And lest we forget its heartland itself, France attracts more tourists than any other country—76.8m in 2010, according to the World Tourism Organisation, leaving America a distant second with 59.7m
carolinewren

Comment: If you speak Mandarin, your brain is different | SBS News - 1 views

  • We speak so effortlessly that most of us never think about it. But psychologists and neuroscientists are captivated by the human capacity to communicate with language.
  • Untangling the brain’s mechanisms for language has been a pillar of neuroscience since its inception. New research published in the Proceedings of the National Academy of Sciences about the different connections going on in the brains of Mandarin and English speakers demonstrates just how flexible our ability to learn language really is.
  • Victims of stroke or traumatic brain injury to either of these crucial areas on the left side of the brain exhibited profound disabilities for producing and understanding language.
  • By six to ten months, children have already learned to be sensitive to the basic sounds, known as phonemes, that matter in their native language.
  • language requires real-time mappings between words and their meanings. This requires that the sounds heard in speech – decoded in the auditory cortex – must be integrated with knowledge about what they mean – in the frontal cortex.
  • Modern theories on connectionism – the idea that knowledge is distributed across different parts of the brain and not tucked into dedicated modules like Broca’s area – have compelled researchers to take a closer look.
  • Mandarin Chinese is a tonal language in which the same basic sounds can refer to vastly different things based on the tone with which they are spoken
  • In a non-tonal language such as English, tone might convey emotional information about the speaker, but indicates nothing about the meaning of the word that is spoken
  • found that these differences between Mandarin Chinese and English change the way the brain’s networks work.
  • researchers took advantage of the basic differences between Mandarin Chinese and English to investigate the differences between the language networks of native speakers of tonal and non-tonal languages. Thirty native Chinese speakers were matched on age, gender, and handedness (they were all right-handed) with a sample of native English speakers. All participants listened to intelligible and unintelligible speech and were asked to judge the gender of the speaker.
  • The first difference was the operation of the brain networks shared by English and Chinese speakers
  • English speakers showed stronger connectivity leading from Wernicke’s area to Broca’s area. This increased connectivity was attributed to English relying more heavily on phonological information, or sounds rather than tones.
  • Chinese speakers had stronger connections leading from an area of the brain called the anterior superior temporal gyrus – which has been identified as a “semantic hub” critical in supporting language – to both Broca’s and Wernicke’s area.
  • increased connectivity is attributed to the enhanced mapping of sound and meaning going on in people who speak tonal languages.
  • The second difference showed activation in an area of the brain’s right hemisphere, but only among the Chinese speakers
  • findings emphasise the importance of developing a bilateral network between the two brain hemispheres to speak and understand languages, particularly for tonal languages like Mandarin Chinese.
Javier E

Writing, Typing, and Economics - The Atlantic - 0 views

  • The first lesson would have to do with the all-important issue of inspiration. All writers know that on some golden mornings they are touched by the wand — are on intimate terms with poetry and cosmic truth. I have experienced those moments myself. Their lesson is simple: It's a total illusion.
  • And the danger in the illusion is that you will wait for those moments. Such is the horror of having to face the typewriter that you will spend all your time waiting. I am persuaded that most writers, like most shoemakers, are about as good one day as the next (a point which Trollope made), hangovers apart. The difference is the result of euphoria, alcohol, or imagination. The meaning is that one had better go to his or her typewriter every morning and stay there regardless of the seeming result. It will be much the same.
  • Writers, in contrast, do nothing because they are waiting for inspiration. In my own case there are days when the result is so bad that no fewer than five revisions are required. However, when I'm greatly inspired, only four revisions are needed before, as I've often said, I put in that note of spontaneity which even my meanest critics concede
  • It helps greatly in the avoidance of work to be in the company of others who are also waiting for the golden moment. The best place to write is by yourself, because writing becomes an escape from the terrible boredom of your own personality. It's the reason that for years I've favored Switzerland, where I look at the telephone and yearn to hear it ring.
  • There may be inspired writers for whom the first draft is just right. But anyone who is not certifiably a Milton had better assume that the first draft is a very primitive thing. The reason is simple: Writing is difficult work. Ralph Paine, who managed Fortune in my time, used to say that anyone who said writing was easy was either a bad writer or an unregenerate liar
  • Thinking, as Voltaire avowed, is also a very tedious thing which men—or women—will do anything to avoid. So all first drafts are deeply flawed by the need to combine composition with thought. Each later draft is less demanding in this regard. Hence the writing can be better
  • There does come a time when revision is for the sake of change—when one has become so bored with the words that anything that is different looks better. But even then it may be better.
  • the lesson of Harry Luce. No one who worked for him ever again escaped the feeling that he was there looking over one's shoulder. In his hand was a pencil; down on each page one could expect, any moment, a long swishing wiggle accompanied by the comment: "This can go." Invariably it could. It was written to please the author and not the reader. Or to fill in the space. The gains from brevity are obvious; in most efforts to achieve brevity, it is the worst and dullest that goes. It is the worst and dullest that spoils the rest.
  • as he grew older, he became less and less interested in theory, more and more interested in information.
  • Reluctantly, but from a long and terrible experience, I would urge my young writers to avoid all attempts at humor
  • Only a very foolish man will use a form of language that is wholly uncertain in its effect. That is the nature of humor
  • Finally, I would come to a matter of much personal interest, intensely self-serving. It concerns the peculiar pitfalls of the writer who is dealing with presumptively difficult or technical matters
  • Economics is an example, and within the field of economics the subject of money, with the history of which I have been much concerned, is an especially good case. Any specialist who ventures to write on money with a view to making himself intelligible works under a grave moral hazard. He will be accused of oversimplification. The charge will be made by his fellow professionals, however obtuse or incompetent
  • In the case of economics there are no important propositions that cannot be stated in plain language
  • Additionally, and especially in the social sciences, much unclear writing is based on unclear or incomplete thought
  • It is possible with safety to be technically obscure about something you haven't thought out. It is impossible to be wholly clear on something you do not understand. Clarity thus exposes flaws in the thought
tongoscar

How does mother tongue affect second language acquisition? - Language Magazine - 0 views

  • Because cues that signal the beginning and ending of words can differ from language to language, a person’s native language can provide misleading information when learning to segment a second language into words.
  • “The moment we hear a new language, all of a sudden we hear a stream of sounds and don’t know where the words begin or end,” Tremblay said. “Even if we know words from the second language and can recognize them in isolation, we may not be able to locate these words in continuous speech, because a variety of processes affect how words are realized in context.”
  • Other cues, such as intonation, are harder to master and are more likely to be influenced by a speaker’s native language.
  • One of the more interesting findings is that when languages share more similarities but still have slight differences, it can be harder for second language learners to use the correct speech cues to identify words.
  • “For English speakers, the differences between English stress and French prominence are so salient that it ought to be obvious and they ought to readjust their system,”
  • Researchers also found that native French speakers who lived in France did better than native French speakers who lived in the U.S. at using French-like intonation cues to locate words in an artificial language.
Javier E

Ann Coulter Is Right to Fear the World Cup - Peter Beinart - The Atlantic - 1 views

  • Ann Coulter penned a column explaining why soccer is un-American. First, it’s collectivist. (“Individual achievement is not a big factor…blame is dispersed.”) Second, it’s effeminate. (“It’s a sport in which athletic talent finds so little expression that girls can play with boys.”) Third, it’s culturally elitist. (“The same people trying to push soccer on Americans are the ones demanding that we love HBO’s “Girls,” light-rail, Beyoncé and Hillary Clinton.”) Fourth, and most importantly, “It’s foreign…Soccer is like the metric system, which liberals also adore because it’s European.”
  • Soccer hatred, in other words, exemplifies American exceptionalism.
  • For Coulter and many contemporary conservatives, by contrast, part of what makes America exceptional is its individualism, manliness and populism
  • Coulter’s deeper point is that for America to truly be America, it must stand apart
  • The core problem with embracing soccer is that in so doing, America would become more like the rest of the world.
  • America’s own league, Major League Soccer, draws as many fans to its stadiums as do the NHL and NBA.
  • I wrote an essay entitled “The End of American Exceptionalism,” which argued that on subjects where the United States has long been seen as different, attitudes in America increasingly resemble those in Europe. Soccer is one of the best examples yet.
  • “Soccer,” Markovits and Hellerman argue, “was perceived by both native-born Americans and immigrants as a non-American activity at a time in American history when nativism and nationalism emerged to create a distinctly American self-image … if one liked soccer, one was viewed as at least resisting—if not outright rejecting—integration into America.”
  • The average age of Americans who call baseball their favorite sport is 53. Among Americans who like football best, it’s 46. Among Americans who prefer soccer, by contrast, the average age is only 37.
  • Old-stock Americans, in other words, were elevating baseball, football, and basketball into symbols of America’s distinct identity. Immigrants realized that embracing those sports offered a way to claim that identity for themselves. Clinging to soccer, by contrast, was a declaration that you would not melt.
  • why is interest in soccer rising now? Partly, because the United States is yet again witnessing mass immigration from soccer-mad nations.
  • the key shift is that America’s sports culture is less nativist. More native-born Americans now accept that a game invented overseas can become authentically American, and that the immigrants who love it can become authentically American too. Fewer believe that to have merit, something must be invented in the United States.
  • Americans today are less likely to insist that America’s way of doing things is always best. In 2002, 60 percent of Americans told the Pew Research Center that, “our culture is superior to others.” By 2011, it was down to 49 percent.
  • Americans over the age of 50 were 15 points more likely to say “our culture is superior” than were people over 50 in Germany, Spain, Britain, and France
  • Americans under 30, by contrast, were actually less likely to say “our culture is superior” than their counterparts in Germany, Spain, and Britain.
  • why didn’t soccer gain a foothold in the U.S. in the decades between the Civil War and World War I, when it was gaining dominance in Europe? Precisely because it was gaining dominance in Europe. The arbiters of taste in late 19th and early 20th century America wanted its national pastimes to be exceptional.
  • the third major pro-soccer constituency is liberals. They’re willing to embrace a European sport for the same reason they’re willing to embrace a European-style health care system: because they see no inherent value in America being an exception to the global rule
  • When the real-estate website Estately created a seven part index to determine a state’s love of soccer, it found that Washington State, Maryland, the District of Columbia, New York, and New Jersey—all bright blue—loved soccer best, while Alabama, Arkansas, North Dakota, Mississippi and Montana—all bright red—liked it least.
  • the soccer coalition—immigrants, liberals and the young—looks a lot like the Obama coalition.
  • (Sports-wise, therefore, Democrats constitute an alliance between soccer and basketball fans while Republicans disproportionately follow baseball, golf, and NASCAR. Football, by far America’s most popular sport, crosses the aisle.)
  • The willingness of growing numbers of Americans to embrace soccer bespeaks their willingness to imagine a different relationship with the world. Historically, conservative foreign policy has oscillated between isolationism and imperialism. America must either retreat from the world or master it. It cannot be one among equals, bound by the same rules as everyone else
  • Exceptionalists view sports the same way. Coulter likes football, baseball, and basketball because America either plays them by itself, or—when other countries play against us—we dominate them.
  • Embracing soccer, by contrast, means embracing America’s role as merely one nation among many, without special privileges. It’s no coincidence that young Americans, in addition to liking soccer, also like the United Nations. In 2013, Pew found that Americans under 30 were 24 points more favorable to the U.N. than Americans over 50.
  • Millennials were also 23 points more likely than the elderly to say America should take its allies’ opinion into account even if means compromising our desires.
  • In embracing soccer, Americans are learning to take something we neither invented nor control, and nonetheless make it our own. It’s a skill we’re going to need in the years to come.
anonymous

The Perseverance of New York City's Wildflowers - The New York Times - 0 views

  • The Perseverance of New York City’s Wildflowers
  • A park in Williamsburg awaits the miniature beauty of its spring blossoms.
  • In Williamsburg, on a seven-acre park by the East River, spring will soon unfurl in blue blossoms
  • Cornflowers are always the first to bloom in the pollinator meadow of Marsha P. Johnson State Park, a welcome sign to bees and people that things are beginning to thaw.
  • If New York City has a warm spring, the cornflowers may open up by late April, eventually followed by orange frills of butterfly milkweed, purple spindly bee balm and yolk-yellow, black-eyed Susans that also inhabit the meadow — hardy species that can weather the salty spray that confronts life on the waterfront.
  • Not all of these flowers are native to New York, or even North America, but they have sustained themselves long enough to become naturalized
  • These species pose little threat to native wildlife, unlike more domineering introduced species such as mugwort, an herb with an intrepid rhizome system.
  • A wildflower can refer to any flowering plant that was not cultivated, intentionally planted or given human aid, yet it still managed to grow and bloom.
  • This is one of several definitions offered by the plant ecologist Donald J. Leopold in Andrew Garn’s new photo book “Wildflowers of New York City,” and one that feels particularly suited to the city and its many transplants.
  • Scarlet bee balm.
  • Ms. Lopez, who grew up on the Upper West Side near a sooty smokestack, has always longed for more green spaces in the city.
  • In February of 2020, Gov. Andrew Cuomo renamed the park after the activist Marsha P. Johnson, one of the central figures of the Stonewall riots and a co-founder of Street Transvestite Action Revolutionaries with the activist Sylvia Rivera. Ms. Johnson, who died in 1992 of undetermined causes, would have turned 75 in August 2020.
  • Mr. Garn did not intend for “Wildflowers of New York City” to be a traditional field guide for identifying flowers. Rather, his reverent portraits invite us to delight in the beauty of flowers that we more often encounter in a sidewalk crack than in a bouquet.
  • Marsha P. Johnson, a central figure of the Stonewall riots and a co-founder of Street Transvestite Action Revolutionaries
  • Ms. Johnson was known for wearing crowns of fresh flowers that she would arrange from leftover blooms and discarded daffodils from the flower district in Manhattan, where she often slept.
  • In one photo, Ms. Johnson wears a crown of roses, carnations, chrysanthemums, frilly tulips, statice and baby’s breath.
  • Although cumulous clusters of baby’s breath are now a staple of floral arrangements, the species is a wildflower native to central and Eastern Europe.
  • Ms. Lopez and STARR have criticized a proposal for a new $70 million beach scheduled to be built on Gansevoort Peninsula, near waterfronts where Ms. Rivera once lived and Ms. Johnson died. In its place, she suggests a memorial garden for Ms. Johnson, Ms. Rivera and other transgender people
  • “We will never feed enough people, we will never plant enough flowers, never be good enough to honor Sylvia and Marsha,” Ms. Lopez said. “They cared too much, even when no one cared for them.”
  • “I have candles lit always for Marsha and Sylvia, but I’m praying especially hard now that we get a plan that includes lots of flowers,” said Mariah Lopez, the executive director of Strategic Trans Alliance for Radical Reform, or STARR, an advocacy group.
  • Her dream of the park includes a range of verdant and functional spaces: a paved area where people can vogue and hold rallies, a flower garden in tribute to Ms. Johnson, a greenhouse and an apiary for bees.
  • Tansy.
  • The redesign of the park will add a new fence around the meadow, as well as interpretive signs about the pollinators who depend on its wildflowers. “What would happen if there were no bees in the world?”
  • “We have to protect them. That’s what the function of this sweet little meadow is.”
  •  
    Real life story and example of how we treat history- what stories we're telling, who we're trying to save.
runlai_jiang

What Is Synesthesia? Definition and Types - 0 views

  • The term "synesthesia" comes from the Greek words syn, which means "together", and aisthesis, which means "sensation." Synesthesia is a perception in which stimulating one sensory or cognitive pathway  causes experiences in another sense or cognitive pathway. In other words, a sense or concept is connected to a different sense or concept, such as hearing a color or tasting a word. The connection between pathways is involuntary and consistent over time, rather than conscious or arbitrary.
  • Types of Synesthesia: There are many different types of synesthesia, but they may be categorized as falling into one of two groups: associative synesthesia and projective synesthesia. An associator feels a connection between a stimulus and a sense, while a projector actually perceives the triggered sensation.
  • There are at least 80 known types of synesthesia, but some are more common than others:
      Chromesthesia: In this common form of synesthesia, sounds and colors are associated with each other. For example, the musical note "D" may correspond to seeing the color green.
      Grapheme-color synesthesia: This is a common form of synesthesia characterized by seeing graphemes (letters or numerals) shaded with a color. Synesthetes don't associate the same colors for a grapheme as each other, although the letter "A" does appear to be red to many individuals. Persons who experience grapheme-color synesthesia sometimes report seeing impossible colors when red and green or blue and yellow graphemes appear next to each other in a word or number.
      Number form: A number form is a mental shape or map of numbers resulting from seeing or thinking about numbers.
      Lexical-gustatory synesthesia: This is a rare type of synesthesia in which hearing a word results in tasting a flavor. For example, a person's name might taste like chocolate.
      Mirror-touch synesthesia: While rare, mirror-touch synesthesia is noteworthy because it can be disruptive to a synesthete's life. In this form of synesthesia, an individual feels the same sensation in response to a stimulus as another person. For example, seeing a person being tapped on the shoulder would cause the synesthete to feel a tap on their own shoulder.
  • How Synesthesia Works: Scientists have yet to make a definitive determination of the mechanism of synesthesia. It may be due to increased cross-talk between specialized regions of the brain. Another possible mechanism is that inhibition in a neural pathway is reduced in synesthetes, allowing multi-sensory processing of stimuli. Some researchers believe synesthesia is based on the way the brain extracts and assigns the meaning of a stimulus (ideasthesia).
  • Who Has Synesthesia? Julia Simner, a psychologist studying synesthesia at the University of Edinburgh, estimates at least 4% of the population has synesthesia and that over 1% of people have grapheme-color synesthesia (colored numbers and letters). More women have synesthesia than men. Some research suggests the incidence
  • Can You Develop Synesthesia? There are documented cases of non-synesthetes developing synesthesia. Specifically, head trauma, stroke, brain tumors, and temporal lobe epilepsy may produce synesthesia. Temporary synesthesia may result from exposure to the psychedelic drugs mescaline or LSD, from sensory deprivation, or from meditation.
Javier E

What's the secret to learning a second language? - Salon.com - 0 views

  • “Arabic is a language of memorization,” he said. “You just have to drill the words into your head, which unfortunately takes a lot of time.” He thought, “How can I maximize the number of words I learn in the minimum amount of time?”
  • Siebert started studying the science of memory and second-language acquisition and found two concepts that went hand in hand to make learning easier: selective learning and spaced repetition. With selective learning, you spend more time on the things you don’t know, rather than on the things you already do
  • Siebert designed his software to use spaced repetition. If you get cup right, the program will make the interval between seeing the word cup longer and longer, but it will cycle cup back in just when you’re about to forget it. If you’ve forgotten cup entirely, the cycle starts again. This system moves the words from your brain’s short-term memory into long-term memory and maximizes the number of words you can learn effectively in a period. You don’t have to cram. (See the code sketch after these annotations.)
  • ARABIC IS ONE of the languages the U.S. Department of State dubs “extremely hard.” Chinese, Japanese, and Korean are the others. These languages’ structures are vastly different from that of English, and they are memorization-driven.
  • To help meet its language-learning goals, in 2003 the Department of Defense established the University of Maryland Center for Advanced Study of Language.
  • MICHAEL GEISLER, a vice president at Middlebury College, which runs the foremost language-immersion school in the country, was blunt: “The drill-and-kill approach we used 20 years ago doesn’t work.” He added, “The typical approach that most programs take these days—Rosetta Stone is one example—is scripted dialogue and picture association. You have a picture of the Eiffel Tower, and you have a sentence to go with it. But that’s not going to teach you the language.”
  • According to Geisler, you need four things to learn a language. First, you have to use it. Second, you have to use it for a purpose. Research shows that doing something while learning a language—preparing a cooking demonstration, creating an art project, putting on a play—stimulates an exchange of meaning that goes beyond using the language for the sake of learning it. Third, you have to use the language in context. This is where Geisler says all programs have fallen short.
  • Fourth, you have to use language in interaction with others. In a 2009 study led by Andrew Meltzoff at the University of Washington, researchers found that young children easily learned a second language from live human interaction while playing and reading books. But audio and DVD approaches with the same material, without the live interaction, fostered no learning progress at all. Two people in conversation constantly give each other feedback that can be used to make changes in how they respond.
  • “our research shows that the ideal model is a blended one,” one that blends technology and a teacher. “Our latest research shows that with the proper use of technology and cognitive neuroscience, we can make language learning more efficient.”
  • The school released its first two online programs, for French and Spanish, last year. The new courses use computer avatars for virtual collaboration; rich video of authentic, unscripted conversations with native speakers; and 3-D role-playing games in which students explore life in a city square, acting as servers and taking orders from customers in a café setting. The goal at the end of the day, as Geisler put it, is for you to “actually be able to interact with a native speaker in his own language and have him understand you, understand him, and, critically, negotiate when you don’t understand what he is saying.” 
  • The program includes the usual vocabulary lists and lessons in how to conjugate verbs, but students are also consistently immersed in images, audio, and video of people from different countries speaking with different accents. Access to actual teachers is another critical component.
kushnerha

If Philosophy Won't Diversify, Let's Call It What It Really Is - The New York Times - 0 views

  • The vast majority of philosophy departments in the United States offer courses only on philosophy derived from Europe and the English-speaking world. For example, of the 118 doctoral programs in philosophy in the United States and Canada, only 10 percent have a specialist in Chinese philosophy as part of their regular faculty. Most philosophy departments also offer no courses on Africana, Indian, Islamic, Jewish, Latin American, Native American or other non-European traditions. Indeed, of the top 50 philosophy doctoral programs in the English-speaking world, only 15 percent have any regular faculty members who teach any non-Western philosophy.
  • Given the importance of non-European traditions in both the history of world philosophy and in the contemporary world, and given the increasing numbers of students in our colleges and universities from non-European backgrounds, this is astonishing. No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain. The present situation is hard to justify morally, politically, epistemically or as good educational and research training practice.
  • While a few philosophy departments have made their curriculums more diverse, and while the American Philosophical Association has slowly broadened the representation of the world’s philosophical traditions on its programs, progress has been minimal.
  • Many philosophers and many departments simply ignore arguments for greater diversity; others respond with arguments for Eurocentrism that we and many others have refuted elsewhere. The profession as a whole remains resolutely Eurocentric.
  • Instead, we ask those who sincerely believe that it does make sense to organize our discipline entirely around European and American figures and texts to pursue this agenda with honesty and openness. We therefore suggest that any department that regularly offers courses only on Western philosophy should rename itself “Department of European and American Philosophy.”
  • We see no justification for resisting this minor rebranding (though we welcome opposing views in the comments section to this article), particularly for those who endorse, implicitly or explicitly, this Eurocentric orientation.
  • Some of our colleagues defend this orientation on the grounds that non-European philosophy belongs only in “area studies” departments, like Asian Studies, African Studies or Latin American Studies. We ask that those who hold this view be consistent, and locate their own departments in “area studies” as well, in this case, Anglo-European Philosophical Studies.
  • Others might argue against renaming on the grounds that it is unfair to single out philosophy: We do not have departments of Euro-American Mathematics or Physics. This is nothing but shabby sophistry. Non-European philosophical traditions offer distinctive solutions to problems discussed within European and American philosophy, raise or frame problems not addressed in the American and European tradition, or emphasize and discuss more deeply philosophical problems that are marginalized in Anglo-European philosophy. There are no comparable differences in how mathematics or physics are practiced in other contemporary cultures.
  • Of course, we believe that renaming departments would not be nearly as valuable as actually broadening the philosophical curriculum and retaining the name “philosophy.” Philosophy as a discipline has a serious diversity problem, with women and minorities underrepresented at all levels among students and faculty, even while the percentage of these groups increases among college students. Part of the problem is the perception that philosophy departments are nothing but temples to the achievement of males of European descent. Our recommendation is straightforward: Those who are comfortable with that perception should confirm it in good faith and defend it honestly; if they cannot do so, we urge them to diversify their faculty and their curriculum.
  • This is not to disparage the value of the works in the contemporary philosophical canon: Clearly, there is nothing intrinsically wrong with philosophy written by males of European descent; but philosophy has always become richer as it becomes increasingly diverse and pluralistic.
  • We hope that American philosophy departments will someday teach Confucius as routinely as they now teach Kant, that philosophy students will eventually have as many opportunities to study the “Bhagavad Gita” as they do the “Republic,” that the Flying Man thought experiment of the Persian philosopher Avicenna (980-1037) will be as well-known as the Brain-in-a-Vat thought experiment of the American philosopher Hilary Putnam (1926-2016), that the ancient Indian scholar Candrakirti’s critical examination of the concept of the self will be as well-studied as David Hume’s, that Frantz Fanon (1925-1961), Kwasi Wiredu (1931- ), Lame Deer (1903-1976) and Maria Lugones will be as familiar to our students as their equally profound colleagues in the contemporary philosophical canon. But, until then, let’s be honest, face reality and call departments of European-American Philosophy what they really are.
  • For demographic, political and historical reasons, the change to a more multicultural conception of philosophy in the United States seems inevitable. Heed the Stoic adage: “The Fates lead those who come willingly, and drag those who do not.”
Javier E

Language and thought: Johnson: Does speaking German change how I see social relationshi... - 0 views

  • Roman Jakobson, a linguist, once said that “Languages differ essentially in what they must convey and not in what they may convey.” How do two-pronoun systems play into this? In German, I must choose du or Sie every time I address someone. According to the logic of language shaping thought, I should therefore be more aware of social relations when I speak German.
  • A believer in the language-shapes-thought idea might argue that speaking German doesn't push me to always be more conscious of social relationships because I'm a non-native speaker, and so I haven't developed the habits of mind of lifelong German speakers. But plenty of native speakers of two-pronoun languages find this system irksome and awkward, just as I do.
  • ...2 more annotations...
  • there is another way in which the double-"you" distinction may nudge thought. It refers to what Dan Slobin, a linguist, has called “thinking for speaking”. Speakers of different languages may well see the world similarly most of the time, but when people are specifically planning to say something, different languages may temporarily force speakers to pay more attention to certain distinctions. For example, every time a German person says “you”, a little attention must be paid to formality. So split pronouns (or other features) may act as a kind of "prime" for certain thoughts or behaviours. Primes can be powerful. Every time I refer to my boss, for example, the formal "you" may prime me to be more aware of the formality and hierarchy of our relationship. So too when I must address an old friend.
  • A bigger question is whether differences between languages persist when people are not "thinking for speaking"—ie, whether they condition something we might call a robust worldview. When silently strolling down a country lane, do speakers of different languages think in profoundly different ways? The popular view is “yes”, but furious debate among researchers continues.
Javier E

E.D. Hirsch Jr.'s 'Cultural Literacy' in the 21st Century - The Atlantic - 0 views

  • much of this angst can be interpreted as part of a noisy but inexorable endgame: the end of white supremacy. From this vantage point, Americanness and whiteness are fitfully, achingly, but finally becoming delinked—and like it or not, over the course of this generation, Americans are all going to have to learn a new way to be American.
  • What is the story of “us” when “us” is no longer by default “white”? The answer, of course, will depend on how aware Americans are of what they are, of what their culture already (and always) has been.
  • The thing about the list, though, was that it was—by design—heavy on the deeds and words of the “dead white males” who had formed the foundations of American culture but who had by then begun to fall out of academic fashion.
  • Conservatives thus embraced Hirsch eagerly and breathlessly. He was a stout defender of the patrimony. Liberals eagerly and breathlessly attacked him with equal vigor. He was retrograde, Eurocentric, racist, sexist.
  • Lost in all the crossfire, however, were two facts: First, Hirsch, a lifelong Democrat who considered himself progressive, believed his enterprise to be in service of social justice and equality. Cultural illiteracy, he argued, is most common among the poor and power-illiterate, and compounds both their poverty and powerlessness. Second: He was right.
  • A generation of hindsight now enables Americans to see that it is indeed necessary for a nation as far-flung and entropic as the United States, one where rising economic inequality begets worsening civic inequality, to cultivate continuously a shared cultural core. A vocabulary. A set of shared referents and symbols.
  • So, first of all, Americans do need a list. But second, it should not be Hirsch’s list. And third, it should not be made the way he made his. In the balance of this essay, I want to unpack and explain each of those three statements.
  • If you take the time to read the book attached to Hirsch’s appendix, you’ll find a rather effective argument about the nature of background knowledge and public culture. Literacy is not just a matter of decoding the strings of letters that make up words or the meaning of each word in sequence. It is a matter of decoding context: the surrounding matrix of things referred to in the text and things implied by it
  • That means understanding what’s being said in public, in the media, in colloquial conversation. It means understanding what’s not being said. Literacy in the culture confers power, or at least access to power. Illiteracy, whether willful or unwitting, creates isolation from power.
  • his point about background knowledge and the content of shared public culture extends well beyond schoolbooks. It applies to the “texts” of everyday life, in commercial culture, in sports talk, in religious language, in politics. In all cases, people become literate in patterns—“schema” is the academic word Hirsch uses. They come to recognize bundles of concept and connotation like “Party of Lincoln.” They perceive those patterns of meaning the same way a chess master reads an in-game chessboard or the way a great baseball manager reads an at-bat. And in all cases, pattern recognition requires literacy in particulars.
  • Lots and lots of particulars. This isn’t, or at least shouldn’t be, an ideologically controversial point. After all, parents on both left and right have come to accept recent research that shows that the more spoken words an infant or toddler hears, the more rapidly she will learn and advance in school. Volume and variety matter. And what is true about the vocabulary of spoken or written English is also true, one fractal scale up, about the vocabulary of American culture.
  • those who demonized Hirsch as a right-winger missed the point. Just because an endeavor requires fluency in the past does not make it worshipful of tradition or hostile to change.
  • radicalism is made more powerful when garbed in traditionalism. As Hirsch put it: “To be conservative in the means of communication is the road to effectiveness in modern life, in whatever direction one wishes to be effective.”
  • Hence, he argued, an education that in the name of progressivism disdains past forms, schema, concepts, figures, and symbols is an education that is in fact anti-progressive and “helps preserve the political and economic status quo.” This is true. And it is made more urgently true by the changes in American demography since Hirsch gave us his list in 1987.
  • If you are an immigrant to the United States—or, if you were born here but are the first in your family to go to college, and thus a socioeconomic new arrival; or, say, a black citizen in Ferguson, Missouri deciding for the first time to participate in a municipal election, and thus a civic neophyte—you have a single overriding objective shared by all immigrants at the moment of arrival: figure out how stuff really gets done here.
  • So, for instance, a statement like “One hundred and fifty years after Appomattox, our house remains deeply divided” assumes that the reader knows that Appomattox is both a place and an event; that the event signified the end of a war; that the war was the Civil War and had begun during the presidency of a man, Abraham Lincoln, who earlier had famously declared that “a house divided against itself cannot stand”; that the divisions then were in large part about slavery; and that the divisions today are over the political, social, and economic legacies of slavery and how or whether we are to respond to those legacies.
  • The more serious challenge, for Americans new and old, is to make a common culture that’s greater than the sum of our increasingly diverse parts. It’s not enough for the United States to be a neutral zone where a million little niches of identity might flourish; in order to make our diversity a true asset, Americans need those niches to be able to share a vocabulary. Americans need to be able to have a broad base of common knowledge so that diversity can be most fully activated.
  • it’s not just newcomers who need greater command of common knowledge. People whose families have been here ten generations are often as ignorant about American traditions, mores, history, and idioms as someone “fresh off the boat.”
  • But why a list, one might ask? Aren’t lists just the very worst form of rote learning and standardized, mechanized education? Well, yes and no.
  • as the pool of potential culture-makers has widened, the modes of culture creation have similarly shifted away from hierarchies and institutions to webs and networks. Wikipedia is the prime embodiment of this reality, both in how the online encyclopedia is crowd-created and how every crowd-created entry contains links to other entries.
  • It is true that lists alone, with no teaching to bring them to life and no expectation that they be connected to a broader education, are somewhere between useless and harmful.
  • since I started writing this essay, dipping into the list has become a game my high-school-age daughter and I play together.
  • I’ll name each of those entries, she’ll describe what she thinks to be its meaning. If she doesn’t know, I’ll explain it and give some back story. If I don’t know, we’ll look it up together. This of course is not a good way for her teachers to teach the main content of American history or English. But it is definitely a good way for us both to supplement what school should be giving her.
  • And however long we end up playing this game, it is already teaching her a meta-lesson about the importance of cultural literacy. Now anytime a reference we’ve discussed comes up in the news or on TV or in dinner conversation, she can claim ownership. Sometimes she does so proudly, sometimes with a knowing look. My bet is that the satisfaction of that ownership, and the value of it, will compound as the years and her education progress.
  • The trouble is, there are also many items on Hirsch’s list that don’t seem particularly necessary for entry into today’s civic and economic mainstream.
  • Which brings us back to why diversity matters. The same diversity that makes it necessary to have and to sustain a unifying cultural core demands that Americans make the core less monochromatic, more inclusive, and continuously relevant for contemporary life
  • it’s worth unpacking the baseline assumption of both Hirsch’s original argument and the battles that erupted around it. The assumption was that multiculturalism sits in polar opposition to a traditional common culture, that the fight between multiculturalism and the common culture was zero-sum.
  • As scholars like Ronald Takaki made clear in books like A Different Mirror, the dichotomy made sense only to the extent that one imagined that nonwhite people had had no part in shaping America until they started speaking up in the second half of the twentieth century.
  • The truth, of course, is that since well before the formation of the United States, the United States has been shaped by nonwhites in its mores, political structures, aesthetics, slang, economic practices, cuisine, dress, song, and sensibility.
  • In its serious forms, multiculturalism never asserted that every racial group should have its own sealed and separate history or that each group’s history was equally salient to the formation of the American experience. It simply claimed that the omni-American story—of diversity and hybridity—was the legitimate American story.
  • as Nathan Glazer has put it (somewhat ruefully), “We are all multiculturalists now.” Americans have come to see—have chosen to see—that multiculturalism is not at odds with a single common culture; it is a single common culture.
  • it is true that in a finite school year, say, with finite class time and books of finite heft, not everything about everyone can be taught. There are necessary trade-offs. But in practice, recognizing the true and longstanding diversity of American identity is not an either-or. Learning about the internment of Japanese Americans does not block out knowledge of D-Day or Midway. It is additive.
  • As more diverse voices attain ever more forms of reach and power we need to re-integrate and reimagine Hirsch’s list of what literate Americans ought to know.
  • To be clear: A 21st-century omni-American approach to cultural literacy is not about crowding out “real” history with the perishable stuff of contemporary life. It’s about drawing lines of descent from the old forms of cultural expression, however formal, to their progeny, however colloquial.
  • Nor is Omni-American cultural literacy about raising the “self-esteem” of the poor, nonwhite, and marginalized. It’s about raising the collective knowledge of all—and recognizing that the wealthy, white, and powerful also have blind spots and swaths of ignorance
  • What, then, would be on your list? It’s not an idle question. It turns out to be the key to rethinking how a list should even get made.
  • the Internet has transformed who makes culture and how. As barriers to culture creation have fallen, orders of magnitude more citizens—amateurs—are able to shape the culture in which we must all be literate. Cat videos and Star Trek fan fiction may not hold up long beside Toni Morrison. But the entry of new creators leads to new claims of right: The right to be recognized. The right to be counted. The right to make the means of recognition and accounting.
  • so any endeavor that makes it easier for those who do not know the memes and themes of American civic life to attain them closes the opportunity gap. It is inherently progressive.
  • This will be a list of nodes and nested networks. It will be a fractal of associations, which reflects far more than a linear list how our brains work and how we learn and create. Hirsch himself nodded to this reality in Cultural Literacy when he described the process he and his colleagues used for collecting items for their list, though he raised it by way of pointing out the danger of infinite regress.
  • His conclusion, appropriate to his times, was that you had to draw boundaries somewhere with the help of experts. My take, appropriate to our times, is that Americans can draw not boundaries so much as circles and linkages, concept sets and pathways among them.
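Read as a data structure, a list of "nodes and nested networks" is simply a graph: entries are nodes, and the "circles and linkages" are edges a reader can follow from any starting point. A minimal sketch of that idea in Python follows; the handful of entries is drawn from this essay, but the particular links between them are invented here purely to illustrate the structure.

```python
# A cultural-literacy "list" as a small graph: each entry links to related
# entries, so readers follow pathways rather than a linear list.
# (Illustrative links only — not a proposed canon.)
CULTURAL_GRAPH: dict[str, set[str]] = {
    "Reconstruction": {"Appomattox", "Party of Lincoln", "Nativism"},
    "Appomattox": {"Reconstruction", "A house divided"},
    "Party of Lincoln": {"Reconstruction"},
    "Nativism": {"The American Dream"},
    "The American Dream": {"Nativism", "A sucker born every minute"},
    "A house divided": {"Appomattox"},
    "A sucker born every minute": {"The American Dream"},
}

def pathway(start: str, depth: int = 2) -> set[str]:
    """Collect every entry reachable from `start` within `depth` hops."""
    seen, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {n for node in frontier
                    for n in CULTURAL_GRAPH.get(node, set())} - seen
        seen |= frontier
    return seen

print(pathway("Appomattox"))  # neighbours, and neighbours of neighbours
```

The point is only the shape: pathways and concept sets rather than a bounded, linear inventory, which is what distinguishes this approach from Hirsch's original appendix.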
  • Because 5,000 or even 500 items is too daunting a place to start, I ask here only for your top ten. What are ten things every American—newcomer or native born, affluent or indigent—should know? What ten things do you feel are both required knowledge and illuminating gateways to those unenlightened about American life? Here are my entries: Whiteness The Federalist Papers The Almighty Dollar Organized labor Reconstruction Nativism The American Dream The Reagan Revolution DARPA A sucker born every minute
Javier E

Arianna Huffington's Improbable, Insatiable Content Machine - The New York Times - 0 views

  • Display advertising — wherein advertisers pay each time an ad is shown to a reader — still dominates the market. But native advertising, designed to match the look and feel of the editorial content it runs alongside, has been on the rise for years.
  • the ethical debate in the media world is over. Socintel360, a research firm, predicts that spending on native advertising in the United States will more than double in the next four years to $18.4 billion.
  • news start-ups today are like cable-television networks in the early ’80s: small, pioneering companies that will be handsomely rewarded for figuring out how to monetize your attention through a new medium. If this is so, the size of The Huffington Post’s audience could one day justify that $1 billion valuation.
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today.“Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism,”
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Looking Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada co
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say.For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil:
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
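Structurally, the assembly described above is root-plus-affix composition: each affix contributes one atom of meaning, and the finished word is the combination of all the choices, which is why the combinatorics run into the millions. A toy sketch of that idea follows; the morpheme strings and the simple concatenation are invented for illustration only, since real Ithkuil morphophonology is far more intricate and its affixes interact rather than merely string together.

```python
# Toy illustration of compositional word-building (invented morphemes,
# NOT actual Ithkuil): a root plus one marker per selected semantic feature.
ROOTS = {"vision": "xas"}
AFFIXES = {
    ("expectation of outcome", 2): "uq",      # mild surprise
    ("contextual appropriateness", 3): "p'",  # mild impropriety
    ("physiological response", 1): "iu",      # non-volitional reaction
}

def build_word(root: str, features: list[tuple[str, int]]) -> str:
    """Concatenate a root with the marker for each (category, degree) chosen."""
    return ROOTS[root] + "".join(AFFIXES[f] for f in features)

# Roughly the choices described for "gawk": surprise, impropriety, reflex.
print(build_word("vision", [
    ("expectation of outcome", 2),
    ("contextual appropriateness", 3),
    ("physiological response", 1),
]))  # -> "xasuqp'iu" (a made-up form)
```

Even in this toy version, a few dozen categories with a handful of degrees each yields an enormous space of distinct, precisely specified words, which is the combinatorial point Quijada is making.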
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances.“There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar.“For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
Javier E

Linguists identify 15,000-year-old 'ultraconserved words' - The Washington Post - 0 views

  • You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes! It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying.
  • That’s because all of the nouns, verbs, adjectives and adverbs in the four sentences are words that have descended largely unchanged from a language that died out as the glaciers retreated at the end of the last Ice Age. Those few words mean the same thing, and sound almost the same, as they did then.
  • A team of researchers has come up with a list of two dozen “ultraconserved words” that have survived 150 centuries. It includes some predictable entries: “mother,” “not,” “what,” “to hear” and “man.” It also contains surprises: “to flow,” “ashes” and “worm.”
  • The existence of the long-lived words suggests there was a “proto-Eurasiatic” language that was the common ancestor to about 700 contemporary languages that are the native tongues of more than half the world’s people.
  • In all, “proto-Eurasiatic” gave birth to seven language families. Several of the world’s important language families, however, fall outside that lineage, such as the one that includes Chinese and Tibetan; several African language families, and those of American Indians and Australian aborigines.