
Home/ TOK Friends/ Group items matching "grammar" in title, tags, annotations or url

Javier E

Logical punctuation: Should we start placing commas outside quotation marks? - 1 views

  • For at least two centuries, it has been standard practice in the United States to place commas and periods inside of quotation marks. This rule still holds for professionally edited prose: what you'll find in Slate, the New York Times, the Washington Post— a
  • in copy-editor-free zones—the Web and emails, student papers, business memos—with increasing frequency, commas and periods find themselves on the outside of quotation marks, looking in. A punctuation paradigm is shifting.
  • you can find copious examples of the "outside" technique—which readers of Virginia Woolf and The Guardian will recognize as the British style—no further away than your Twitter or Facebook feed.
  • the main reason is that the British way simply makes more sense. Indeed, since at least the 1960s a common designation for that style has been "logical punctuation."
  • American style is inconsistent, moreover, because when it comes to other punctuation marks—semicolons, colons, exclamation points, question marks, dashes—we follow British/logical protocol.
  • If it seems hard or even impossible to defend the American way on the merits, that's probably because it emerged from aesthetic, not logical, considerations
Javier E

Why Save a Language? - NYTimes.com - 0 views

  • “TELL me, why should we care?” he asks.
  • if indigenous people want to give up their ancestral language to join the modern world, why should we consider it a tragedy? Languages have always died as time has passed. What’s so special about a language?
  • The answer I’m supposed to give is that each language, in the way it applies words to things and in the way its grammar works, is a unique window on the world.
  • But the question is whether such infinitesimal differences, perceptible only in a laboratory, qualify as worldviews — cultural standpoints or ways of thinking that we consider important. I think the answer is no.
  • experiments do show that a language can have a fascinating effect on how its speakers think
  • If a language dies, a fascinating way of thinking dies along with it.
  • One psychologist argued some decades ago that this meant that Chinese makes a person less sensitive to such distinctions, which, let’s face it, is discomfitingly close to saying Chinese people aren’t as quick on the uptake as the rest of us. The truth is more mundane: Hypotheticality and counterfactuality are established more by context in Chinese than in English.
  • extrapolating cognitive implications from language differences is a delicate business.
  • But if a language is not a worldview, what do we tell the guy in the lecture hall? Should we care that in 100 years only about 600 of the current 6,000 languages may still be spoken?
  • The answer is still yes, but for other reasons.
  • First, a central aspect of any culture’s existence as a coherent entity is the fact of its having its own language, regardless of what the language happens to be like
  • because language is so central to being human, to have a language used only with certain other people is a powerful tool for connection and a sense of community.
  • Second, languages are scientifically interesting even if they don’t index cultural traits. They offer variety equivalent to the diversity of the world’s fauna and flora.
  • As with any other feature of the natural world, such variety tests and expands our sense of the possible, of what is “normal.”
  • Cultures, to be sure, show how we are different. Languages, however, are variations on a worldwide, cross-cultural perception of this thing called life.
Javier E

Speaking to My Father in a Dead Dialect - NYTimes.com - 1 views

  • After his death, I would hear my father’s voice but didn’t know how to respond. When I imagined myself speaking to him in English, it sounded pedantic and prissy. Answering in Italian was no less stilted, either when I tried to revive my Calabrian or when I used the textbook grammar that was unnatural to both of us. I had so much to tell him but no way to say it, a reflection of our relationship during his lifetime. Without his words, I was losing a way to describe the world.
Javier E

What's behind the confidence of the incompetent? This suddenly popular psychological phenomenon. - The Washington Post - 0 views

  • Someone who has very little knowledge in a subject claims to know a lot. That person might even boast about being an expert.
  • This phenomenon has a name: the Dunning-Kruger effect. It’s not a disease, syndrome or mental illness; it is present in everybody to some extent, and it’s been around as long as human cognition, though only recently has it been studied and documented in social psychology.
  • Charles Darwin followed that up in 1871 with “ignorance more frequently begets confidence than does knowledge.”
  • Put simply, incompetent people think they know more than they really do, and they tend to be more boastful about it.
  • To test Darwin’s theory, the researchers quizzed people on several topics, such as grammar, logical reasoning and humor. After each test, they asked the participants how they thought they did. Specifically, participants were asked how many of the other quiz-takers they beat.
  • Time after time, no matter the subject, the people who did poorly on the tests ranked their competence much higher
  • On average, test takers who scored as low as the 10th percentile ranked themselves near the 70th percentile. Those least likely to know what they were talking about believed they knew as much as the experts.
  • Dunning and Kruger’s results have been replicated in at least a dozen different domains: math skills, wine tasting, chess, medical knowledge among surgeons and firearm safety among hunters.
  • Even though President Trump’s statements are rife with errors, falsehoods or inaccuracies, he expresses great confidence in his aptitude. He says he does not read extensively because he solves problems “with very little knowledge other than the knowledge I [already] had.” He has said in interviews he doesn’t read lengthy reports because “I already know exactly what it is.”
  • He has “the best words” and cites his “high levels of intelligence” in rejecting the scientific consensus on climate change. Decades ago, he said he could end the Cold War: “It would take an hour and a half to learn everything there is to learn about missiles,” Trump told The Washington Post’s Lois Romano over dinner in 1984. “I think I know most of it anyway.”
  • Whether people want to understand “the other side” or they’re just looking for an epithet, the Dunning-Kruger effect works as both, Dunning said, which he believes explains the rise of interest.
  • Dunning says the effect is particularly dangerous when someone with influence or the means to do harm doesn’t have anyone who can speak honestly about their mistakes.
  • Not surprisingly (though no less concerning), Dunning’s follow-up research shows the poorest performers are also the least likely to accept criticism or show interest in self-improvement.
anonymous

Want to help your child succeed in school? Add language to the math, reading mix -- ScienceDaily - 0 views

  • Research shows that the more skills children bring with them to kindergarten -- in basic math, reading, even friendship and cooperation -- the more likely they will succeed in those same areas in school.
  • Now it's time to add language to that mix of skills, says a new University of Washington-led study. Not only does a child's use of vocabulary and grammar predict future proficiency with the spoken and written word, but it also affects performance in other subject areas.
  • The team analyzed academic and behavioral assessments, assigned standardized scores and looked at how scores correlated in grades 1, 3, and 5. Growth curve modeling allowed the team to look at children's levels of performance across time and investigate rates of change at specific times in elementary school.
  • Reading ability in kindergarten predicted reading, math and language skills later on; and math proficiency correlated with math and reading performance over time.
  • Measuring the impact of one skill on another, in addition to measuring growth in the same skill, provides more of a "whole child" perspective, Pace said. A child who enters school with little exposure to number sense or spatial concepts but with strong social skills may benefit from that emotional buffer.
  • Researchers expected to find that the effects of kindergarten readiness would wear off by third grade, the time when elementary school curriculum transitions from introducing foundational skills to helping students apply those skills as they delve deeper into content areas. But according to the study, children's performance in kindergarten continues to predict their performance in grades three through five.
  • The study also represents an opportunity to rethink what skills are considered measures of kindergarten-readiness, she said.
kushnerha

BBC - Future - The secret "anti-languages" you're not supposed to know - 2 views

  • speak an English “anti-language”. Since at least Tudor times, secret argots have been used in the underworld of prisoners, escaped slaves and criminal gangs as a way of confusing and befuddling the authorities.Thieves’ Cant, Polari, and Gobbledygook (yes, it’s a real form of slang) are just a few of the examples from the past – but anti-languages are mercurial beasts that are forever evolving into new and more vibrant forms.
  • A modern anti-language could very well be spoken on the street outside your house. Unless you yourself are a member of the “anti-society”, the strange terms would sound like nonsense. Yet those words may have nevertheless influenced your swear words, the comedy you enjoy and the music on your iPod – without you even realising the shady interactions that shaped them.
  • One of the first detailed records of an anti-language comes from a 16th Century magistrate called Thomas Harman. Standing at his front door, he offered food and money to passing beggars in return for nothing more than words. “He would say 'either I throw you in prison or you give me your Cant,'”
  • “Slang may not represent us at our best, or our most admirable, but it represents us as human beings with anger, fear, self-aggrandisement, and our obsession with sex and bodily parts.”
  • This clever, playful use of metaphor would come to define anti-languages for Halliday. As you could see from the dialogue between the two Elizabethan ruffians, the strange, nonsensical words render a sentence almost impossible to comprehend for outsiders, and the more terms you have, the harder it is for an outsider to learn the code. It is the reason that selling words to the police can be heavily punished among underworld gangs.
  • All borrow the grammar of the mother language but replace words (“London”, “purse”, “money”, “alehouse”) with another, elliptical term (“Rome”, “bounge”, “lower”, “bowsing ken”). Often, the anti-language may employ dozens of terms that have blossomed from a single concept – a feature known as “over-lexicalisation”. Halliday points to at least 20 terms that Elizabethan criminals used to describe fellow thieves, for instance
  • Similarly, the Kolkata underworld had 41 words for police and more than 20 for bomb. Each anti-society may have its own way of generating new terms; often the terms are playful metaphors (such as “bawdy basket”), but they can also be formed from existing words by swapping around or inserting syllables – “face” might become “ecaf”, for instance.
  • striking similarities in the patois spoken by all three underground groups and the ways it shaped their interactions.
  • “The better you are, the higher the status between those users,” explains Martin Montgomery, author of An Introduction to Language and Society.
  • Halliday doubted that secrecy was the only motive for building an anti-language, though; he found that it also helps define a hierarchy within the “anti-society”. Among the Polish prisoners, refusing to speak the lingo could denigrate you to the lowest possible rung of the social ladder, the so-called “suckers”.
  • The concept of an anti-language throws light on many of the vibrant slangs at the edges of society, from Cockney rhyming slang and Victorian “Gobbledygook” to the “Mobspeak” of the Mafia and “Boobslang” found uniquely in New Zealand prisons. The breadth and range of the terms can be astonishing; a lexicography of Boobslang reaches more than 200 pages, with 3,000 entries covering many areas of life.
  • Consider Polari. Incorporating elements of criminal cants, the gypsy Romani language, and Italian words, it was eventually adopted by the gay community of early 20th Century Britain, when homosexuality was still illegal. (Taking a “vada” at a “bona omi” for instance, means take a look at the good-looking man). Dropping an innocent term into a conversation would have been a way of identifying another gay man, without the risk of incriminating yourself among people who were not in the know.
  • His success is a startling illustration of the power of an anti-language to subvert – using the establishment's prudish "Auntie" to broadcast shocking scenes of gay culture, two years before the Sexual Offences Act decriminalised homosexuality. The show may have only got the green light thanks to the fact that the radio commissioners either didn’t understand the connotations
  • the song Girl Loves Me on David Bowie’s latest album was written as a combination of Polari and Nadsat, the fictional anti-language in Anthony Burgess’s A Clockwork Orange.
  • Montgomery thinks we can see a similar process in the lyrics of hip-hop music. As with the other anti-languages, you can witness the blossoming of words for the illegal activities that might accompany gang culture. “There are so many words for firearm, for different kinds of drug, for money,”
  • Again, the imaginative terms lend themselves to artistic use. “There’s quite often a playful element when you elaborate new terms for old,” Montgomery says. “To use broccoli as a word for a drug – you take a word from the mainstream and convert it to a new use, and it has a semi-humorous twist to it.”
  • He thinks that the web will only encourage the creation of slangs that share some of the qualities of anti-languages; you just need to look at the rich online vocabulary that has emerged to describe prostitution.
  • new, metaphorical forms of speech will also proliferate in areas threatened by state censorship; already, you can see a dozen euphemisms flourishing in place of every term that is blocked from a search engine or social network. If we can learn anything from this rich history of criminal cants, it is the enormous resilience of human expression in the face of oppression.
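The syllable-swapping mechanism described in the annotations above ("face" becomes "ecaf") can be sketched in a few lines. This is an illustrative toy, assuming simple whole-word reversal; real anti-languages apply subtler rules (syllable swaps, inserted sounds, borrowed vocabulary).

```python
def back_slang(word: str) -> str:
    """Reverse a word, as in the back-slang that turns 'face' into 'ecaf'.

    Whole-word reversal is the simplest illustrative case; actual
    anti-language transformations are less mechanical than this.
    """
    return word[::-1]

print(back_slang("face"))  # ecaf
```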
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated at the top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what is already there.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months were spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
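Newcombe's point about exhaustive rather than selective testing can be made concrete with a toy. The sketch below is not TLA+ itself; it is a hypothetical miniature model checker in Python that breadth-first searches every reachable state of a tiny two-client lock protocol and asserts a mutual-exclusion invariant in each state. This is, in spirit, what TLA+'s checker (TLC) does at far larger scale: instead of sampling a few test cases, it visits every state the design can reach.

```python
from collections import deque

# Toy "specification": two clients, each either idle, waiting, or holding a lock.
# A state is a tuple (client0, client1); the invariant is mutual exclusion.

def next_states(state):
    """All states reachable in one step: a client may request the lock,
    acquire it (only if nobody currently holds it), or release it."""
    successors = []
    for i in (0, 1):
        s = list(state)
        if state[i] == "idle":
            s[i] = "waiting"
        elif state[i] == "waiting" and "holding" not in state:
            s[i] = "holding"
        elif state[i] == "holding":
            s[i] = "idle"
        else:
            continue
        successors.append(tuple(s))
    return successors

def invariant(state):
    # Mutual exclusion: at most one client holds the lock at a time.
    return state.count("holding") <= 1

def check(init):
    """Exhaustively explore every reachable state (breadth-first),
    checking the invariant in each one -- the essence of model checking."""
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        assert invariant(state), f"invariant violated in {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

reachable = check(("idle", "idle"))
print(len(reachable))  # every reachable state has been visited and checked
```

A real TLA+ specification would state the same `Init`, `Next`, and invariant declaratively and let TLC do the exhaustive search; the point of the sketch is only that "test it exhaustively" means enumerating the full state space, not running more unit tests.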
Javier E

My dad predicted Trump in 1985 - it's not Orwell, he warned, it's Brave New World | Media | The Guardian - 2 views

  • But an image? One never says a picture is true or false. It either captures your attention or it doesn’t. The more TV we watched, the more we expected – and with our finger on the remote, the more we demanded – that not just our sitcoms and cop procedurals and other “junk TV” be entertaining but also our news and other issues of import.
  • This was, in spirit, the vision that Huxley predicted way back in 1931, the dystopia my father believed we should have been watching out for. He wrote:
  • What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture.
  • ...17 more annotations...
  • Today, the average weekly screen time for an American adult – brace yourself; this is not a typo – is 74 hours (and still going up)
  • The soundbite has been replaced by virality, meme, hot take, tweet. Can serious national issues really be explored in any coherent, meaningful way in such a fragmented, attention-challenged environment?
  • how engaged can any populace be when the most we’re asked to do is to like or not like a particular post, or “sign” an online petition?
  • How seriously should anyone take us, or should we take ourselves, when the “optics” of an address or campaign speech – raucousness, maybe actual violence, childishly attention-craving gestures or facial expressions – rather than the content of the speech determines how much “airtime” it gets, and how often people watch, share and favorite it?
  • Our public discourse has become so trivialized, it’s astounding that we still cling to the word “debates” for what our presidential candidates do onstage when facing each other.
  • Who can be shocked by the rise of a reality TV star, a man given to loud, inflammatory statements, many of which are spectacularly untrue but virtually all of which make for what used to be called “good television”?
  • Who can be appalled when the coin of the realm in public discourse is not experience, thoughtfulness or diplomacy but the ability to amuse – no matter how maddening or revolting the amusement?
  • “Television is a speed-of-light medium, a present-centered medium,” my father wrote. “Its grammar, so to say, permits no access to the past … history can play no significant role in image politics. For history is of value only to someone who takes seriously the notion that there are patterns in the past which may provide the present with nourishing traditions.”
  • Later in that passage, Czesław Miłosz, winner of the Nobel prize for literature, is cited for remarking in his 1980 acceptance speech that that era was notable for “a refusal to remember”; my father notes Miłosz referencing “the shattering fact that there are now more than one hundred books in print that deny that the Holocaust ever took place”.
  • “An Orwellian world is much easier to recognize, and to oppose, than a Huxleyan,” my father wrote. “Everything in our background has prepared us to know and resist a prison when the gates begin to close around us … [but] who is prepared to take arms against a sea of amusements?”
  • I wish I could tell you that, for all his prescience, my father also supplied a solution. He did not.
  • First: treat false allegations as an opportunity. Seek information as close to the source as possible.
  • Second: don’t expect “the media” to do this job for you. Some of its practitioners do, brilliantly and at times heroically. But most of the media exists to sell you things.
  • Finally, and most importantly, it should be the responsibility of schools to make children aware of our information environments, which in many instances have become our entertainment environments
  • We must teach our children, from a very young age, to be skeptics, to listen carefully, to assume everyone is lying about everything. (Well, maybe not everyone.)
  • “what is required of us now is a new era of responsibility … giving our all to a difficult task. This is the price and the promise of citizenship.”
  • we need more than just hope for a way out. We need a strategy, or at least some tactics.
Javier E

Fight the Future - The Triad - 1 views

  • In large part because our major tech platforms reduced the coefficient of friction (μ for my mechanics nerd posse) to basically zero. QAnons crept out of the dark corners of the web—obscure boards like 4chan and 8kun—and got into the mainstream platforms YouTube, Facebook, Instagram, and Twitter.
  • Why did QAnon spread like wildfire in America?
  • These platforms not only made it easy for conspiracy nuts to share their crazy, but they used algorithms that actually boosted the spread of crazy, acting as a force multiplier.
  • ...24 more annotations...
  • So it sounds like a simple fix: Impose more friction at the major platform level and you’ll clean up the public square.
  • But it’s not actually that simple because friction runs counter to the very idea of the internet.
  • The fundamental precept of the internet is that it reduces marginal costs to zero. And this fact is why the design paradigm of the internet is to continually reduce friction experienced by users to zero, too. Because if the second unit of everything is free, then the internet has a vested interest in pushing that unit in front of your eyeballs as smoothly as possible.
  • It's not that the internet is “broken,” but rather that it's been functioning exactly as it was designed to:
  • Perhaps more than any other job in the world, you do not want the President of the United States to live in a frictionless state of posting. The Presidency is not meant to be a frictionless position, and the United States government is not a frictionless entity, much to the chagrin of many who have tried to change it. Prior to this administration, decisions were closely scrutinized for, at the very least, legality, along with the impact on diplomacy, general norms, and basic grammar. This kind of legal scrutiny and due diligence is also a kind of friction--one that we now see has a lot of benefits. 
  • The deep lesson here isn’t about Donald Trump. It’s about the collision between the digital world and the real world.
  • In the real world, marginal costs are not zero. And so friction is a desirable element in helping to get to the optimal state. You want people to pause before making decisions.
  • described friction this summer as: “anything that inhibits user action within a digital interface, particularly anything that requires an additional click or screen.” For much of my time in the technology sector, friction was almost always seen as the enemy, a force to be vanquished. A “frictionless” experience was generally held up as the ideal state, the optimal product state.
  • Trump was riding the ultimate frictionless optimized engagement Twitter experience: he rode it all the way to the presidency, and then he crashed the presidency into the ground.
  • From a metrics and user point of view, the abstract notion of the President himself tweeting was exactly what Twitter wanted in its original platonic ideal. Twitter has been built to incentivize someone like Trump to engage and post
  • The other day we talked a little bit about how fighting disinformation, extremism, and online cults is like fighting a virus: There is no “cure.” Instead, what you have to do is create enough friction that the rate of spread becomes slow.
  • Our challenge is that when human and digital design comes into conflict, the artificial constraints we impose should be on the digital world to become more in service to us. Instead, we’ve let the digital world do as it will and tried to reconcile ourselves to the havoc it wreaks.
  • And one of the lessons of the last four years is that when you prize the digital design imperatives—lack of friction—over the human design imperatives—a need for friction—then bad things can happen.
  • We have an ongoing conflict between the design precepts of humans and the design precepts of computers.
  • Anyone who works with computers learns to fear their capacity to forget. Like so many things with computers, memory is strictly binary. There is either perfect recall or total oblivion, with nothing in between. It doesn't matter how important or trivial the information is. The computer can forget anything in an instant. If it remembers, it remembers for keeps.
  • This doesn't map well onto human experience of memory, which is fuzzy. We don't remember anything with perfect fidelity, but we're also not at risk of waking up having forgotten our own name. Memories tend to fade with time, and we remember only the more salient events.
  • And because we live in a time when storage grows ever cheaper, we learn to save everything, log everything, and keep it forever. You never know what will come in useful. Deleting is dangerous.
  • Our lives have become split between two worlds with two very different norms around memory.
  • [A] lot of what's wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
  • The digital world is designed to never forget anything. It has perfect memory. Forever. So that one time you made a crude joke 20 years ago? It can now ruin your life.
  • Memory in the carbon-based world is imperfect. People forget things. That can be annoying if you’re looking for your keys but helpful if you’re trying to broker peace between two cultures. Or simply become a better person than you were 20 years ago.
  • The digital and carbon-based worlds have different design parameters. Marginal cost is one of them. Memory is another.
  • 2. Forget Me Now
  • 1. Fix Tech, Fix America
caelengrubb

Does Language Influence Culture? - WSJ - 0 views

  • These questions touch on all the major controversies in the study of mind, with important implications for politics, law and religion.
  • The idea that language might shape thought was for a long time considered untestable at best and more often simply crazy and wrong. Now, a flurry of new cognitive science research is showing that in fact, language does profoundly influence how we see the world.
  • Dr. Chomsky proposed that there is a universal grammar for all human languages—essentially, that languages don't really differ from one another in significant ways. And because languages didn't differ from one another, the theory went, it made no sense to ask whether linguistic differences led to differences in thinking.
  • ...10 more annotations...
  • The search for linguistic universals yielded interesting data on languages, but after decades of work, not a single proposed universal has withstood scrutiny. Instead, as linguists probed deeper into the world's languages (7,000 or so, only a fraction of them analyzed), innumerable unpredictable differences emerged.
  • In the past decade, cognitive scientists have begun to measure not just how people talk, but also how they think, asking whether our understanding of even such fundamental domains of experience as space, time and causality could be constructed by language.
  • About a third of the world's languages (spoken in all kinds of physical environments) rely on absolute directions for space.
  • As a result of this constant linguistic training, speakers of such languages are remarkably good at staying oriented and keeping track of where they are, even in unfamiliar landscapes.
  • People rely on their spatial knowledge to build many other more complex or abstract representations including time, number, musical pitch, kinship relations, morality and emotions.
  • And many other ways to organize time exist in the world's languages. In Mandarin, the future can be below and the past above. In Aymara, spoken in South America, the future is behind and the past in front.
  • Beyond space, time and causality, patterns in language have been shown to shape many other domains of thought. Russian speakers, who make an extra distinction between light and dark blues in their language, are better able to visually discriminate shades of blue.
  • Patterns in language offer a window on a culture's dispositions and priorities.
  • Languages, of course, are human creations, tools we invent and hone to suit our needs
  • Simply showing that speakers of different languages think differently doesn't tell us whether it's language that shapes thought or the other way around. To demonstrate the causal role of language, what's needed are studies that directly manipulate language and look for effects in cognition.
caelengrubb

Your Ability to Can Even: A Defense of Internet Linguistics - The Toast - 0 views

  • When the new grammatical structures and phrases express something that conventional language simply cannot
  • this new grammar-bending, punctuation-erasing, verb-into-noun-turning, key-board-smashing linguistic convention doesn’t dominate the whole Internet
  • language generated on Tumblr is now becoming Facebook and Twitter language and influencing language everywhere from Buzzfeed to Autostraddle.
  • ...10 more annotations...
  • The linguistic study of the Internet is a very young field but it does, in fact, exist
  • Conventional wisdom portrays this form of linguistic flexibility and playfulness as the end of intelligent human life. The Internet has been blamed for making children illiterate, making adults stupid and generally tarnishing the state of modern discourse.
  • Not only are these allegations not true. David Crystal’s research actually points to the opposite.
  • Men and women on the Internet use many of the same tropes, enthusiasm markers and emphasizers in order to communicate. In the world of blogging and Internet writing, women are the creators of language
  • The backlash confirms the emergence of Internet Language as a fairly serious development, if not a very small and vibrant written dialect
  • Dialects are characterized as deviations from the “standard” version of a given language and are often dismissed due to their lack of prestige by standard users of the language
  • The fact is, the type of language that is being created online is affecting day-to-day speech patterns and writing styles of most young adults
  • Dialects develop when people with a distinct cultural and linguistic heritage run up against a rigid and unfamiliar system, usually by immigrating to a new country. It becomes necessary to develop a way to retain old linguistic features while adopting new ones in order to be able to communicate.
  • Those who use technology read more on a day-to-day basis than non-tech users and are, therefore, faster and better readers.
  • But the Internet Language phenomenon is just as much sociological as it is sociolinguistic: we are just as shaped by language as it is shaped by us.
caelengrubb

How the Language We Speak Affects the Way We Think | Psychology Today - 0 views

  • The story begins with the first American linguists who described (scientifically) some of the languages spoken by Native Americans. They discovered many awkward differences compared to the languages they had learned in school (ancient Greek, Latin, English, German, and the like).
  • They found sounds never heard in European languages (like ejective consonants), strange meanings encoded in the grammar (like parts of the verb referring to shapes of the objects), or new grammatical categories (like evidentiality, that is, the source of knowledge about the facts in a sentence).
  • Not surprisingly, some of these linguists concluded that such strange linguistic systems should have an effect on the mind of their speakers
  • ...10 more annotations...
  • Edward Sapir, one of the most influential American linguists, wrote: “The worlds in which different societies live are distinct worlds, not merely the same worlds with different labels attached” (Sapir, 1949: 162).
  • Now it was suggested that the world might be perceived differently by people speaking different languages.
  • This effect of framing or filtering is the main effect we can expect—regarding language—from perception and thought. Languages do not limit our ability to perceive the world or to think about the world, but they focus our perception, attention, and thought on specific aspects of the world.
  • Chinese-speaking children learn to count earlier than English-speaking children because Chinese numbers are more regular and transparent than English numbers (in Chinese, "eleven" is "ten one").
  • So, different languages focus the attention of their speakers on different aspects of the environment—either physical or cultural.
  • We linguists say that these salient aspects are either lexicalized or grammaticalised. Lexicalizing means that you have words for concepts, which work as shorthands for those concepts. This is useful because you don't need to explain (or paraphrase) the meaning you want to convey.
  • The lexicon is like a big, open bag: Some words are coined or borrowed because you need them for referring to new objects, and they are put into the bag. Conversely, some objects are not used anymore, and then the words for them are removed from the bag.
  • Dyirbal, a language spoken in Northern Australia, for example, has four noun classes (like English genders).
  • This grammatical classification of nouns involves a coherent view of the world, including an original mythology.
  • In summary, language functions as a filter of perception, memory, and attention. Whenever we construct or interpret a linguistic statement, we need to focus on specific aspects of the situation that the statement describes
caelengrubb

Our Language Affects What We See - Scientific American - 0 views

  • Does the language you speak influence how you think? This is the question behind the famous linguistic relativity hypothesis, that the grammar or vocabulary of a language imposes on its speakers a particular way of thinking about the world. 
  • The strongest form of the hypothesis is that language determines thought
  • A weak form is now thought to be obviously true, which is that if one language has a specific vocabulary item for a concept but another language does not, then speaking about the concept may happen more frequently or more easily.
  • ...6 more annotations...
  • Scholars are now interested in whether having a vocabulary item for a concept influences thought in domains far from language, such as visual perception.
  • In the journal Psychological Science, Martin Maier and Rasha Abdel Rahman investigated whether the color distinction in the Russian blues would help the brain become consciously aware of a stimulus which might otherwise go unnoticed.
  • The task selected to investigate this is the "attentional blink." This is an experimental paradigm frequently used to test whether a stimulus is consciously noticed.
  • The current study is an important advance in documenting how linguistic categories influence perception. Consider how this updates the original Russian blues study, in which observers pressed a button to indicate whether two shades of blue were the same or different
  • In that study, it seems likely that observers silently labeled colors in order to make fast decisions. It is less likely that labeling was used during the attentional blink task, because paying attention to color is not required and indeed was irrelevant to the task.
  •  The current finding indicates that linguistic knowledge can influence perception, contradicting the traditional view that perception is processed independently from other aspects of cognition, including language.
katedriscoll

Does Language Influence our View of the World? | TOKTalk.net - 2 views

  • According to the Sapir-Whorf-Hypothesis (also known as linguistic relativity) language does not only reflect our way of thinking, but is also able to shape it. This hypothesis became known in the 1950s. People from different cultures and languages view the world differently and organize their reality differently. The way that they think is influenced by the grammar and vocabulary of their language
  • In the Arapaho culture, for example, there is only one word for “father” and for “uncle” (2). Does this now mean that a child of this culture does not differentiate between his/her own father and the uncle? I (personally) do not think so, but this is something that the anthropologists have to answer. As so often, I think that the answer is somewhere in between. There are certainly many concepts that depend very strongly on language
  •  
    This article discusses how language can affect different things, such as space and time organization and colour perception. 
Javier E

The Dictionary Is Telling People How to Speak Again - The Atlantic - 1 views

  • print dictionaries have embodied certain ideas about democracy and capitalism that seem especially American—specifically, the notion that “good” English can be packaged and sold, becoming accessible to anyone willing to work hard enough to learn it.
  • Massive social changes in the 1960s accompanied the appearance of Webster’s Third, and a new era arose for dictionaries: one in which describing how people use language became more important than showing them how to do so properly. But that era might finally be coming to an end, thanks to the internet, the decline of print dictionaries, and the political consequences of an anything-goes approach to language.
  • The standard way of describing these two approaches in lexicography is to call them “descriptivist” and “prescriptivist.” Descriptivist lexicographers, steeped in linguistic theory, eschew value judgements about so-called correct English and instead describe how people are using the language. Prescriptivists, by contrast, inform readers which usage is “right” and which is “wrong.”
  • ...11 more annotations...
  • Many American readers, though, didn’t want a non-hierarchical assessment of their language. They wanted to know which usages were “correct,” because being able to rely on a dictionary to tell you how to sound educated and upper class made becoming upper class seem as if it might be possible. That’s why the public responded badly to Webster’s latest: They craved guidance and rules.
  • Webster’s Third so unnerved critics and customers because the American idea of social mobility is limited, provisional, and full of paradoxes
  • There’s no such thing as social mobility if everyone can enjoy it. To be allowed to move around within a hierarchy implies that the hierarchy must be left largely intact. But in America, people have generally accepted the idea of inherited upper-class status, while seeing upward social mobility as something that must be earned.
  • In a 2001 Harper’s essay about the Webster’s Third controversy, David Foster Wallace called the publication of the dictionary “the Fort Sumter of the contemporary usage wars.”
  • for decades after the publication of Webster’s Third, people still had intense opinions about dictionaries. In the 1990s, an elderly copy editor once told me, with considerable vehemence, that Merriam-Webster’s Dictionaries were “garbage.” She would only use Houghton Mifflin’s American Heritage Dictionary, which boasted a Usage Panel of experts to advise readers about the finer points of English grammar
  • what descriptivists do: They describe rather than judge. Nowadays, this approach to dictionary making is generally not contested or even really discussed.
  • In his 2009 book Going Nucular, Geoffrey Nunberg observes that we now live in a culture in which there are no clear distinctions between highbrow, middlebrow, and lowbrow culture. It stands to reason that in a society in which speaking in a recognizably “highbrow” way confers no benefits, dictionaries will likely matter less
  • If American Heritage was aggressively branding itself in the 1960s, Merriam-Webster is doing the same now.
  • The company has a feisty blog and Twitter feed that it uses to criticize linguistic and grammatical choices. President Trump and his administration are regular catalysts for social-media clarifications by Merriam-Webster. The company seems bothered when Trump and his associates change the meanings of words for their own convenience, or when they debase the language more generally.
  • it seems that the way the company has regained its relevance in the post-print era is by having strong opinions about how people should use English.
  • It may be that in spite of Webster’s Third’s noble intentions, language may just be too human a thing to be treated in an entirely detached, scientific way. Indeed, I’m not sure I want to live in a society in which citizens can’t call out government leaders when they start subverting language in distressing ways.
Javier E

MacIntyre | Internet Encyclopedia of Philosophy - 0 views

  • For MacIntyre, “rationality” comprises all the intellectual resources, both formal and substantive, that we use to judge truth and falsity in propositions, and to determine choice-worthiness in courses of action
  • Rationality in this sense is not universal; it differs from community to community and from person to person, and may both develop and regress over the course of a person’s life or a community’s history.
  • So rationality itself, whether theoretical or practical, is a concept with a history: indeed, since there are also a diversity of traditions of enquiry, with histories, there are, so it will turn out, rationalities rather than rationality, just as it will also turn out that there are justices rather than justice
  • ...164 more annotations...
  • Rationality is the collection of theories, beliefs, principles, and facts that the human subject uses to judge the world, and a person’s rationality is, to a large extent, the product of that person’s education and moral formation.
  • To the extent that a person accepts what is handed down from the moral and intellectual traditions of her or his community in learning to judge truth and falsity, good and evil, that person’s rationality is “tradition-constituted.” Tradition-constituted rationality provides the schemata by which we interpret, understand, and judge the world we live in
  • The apparent problem of relativism in MacIntyre’s theory of rationality is much like the problem of relativism in the philosophy of science. Scientific claims develop within larger theoretical frameworks, so that the apparent truth of a scientific claim depends on one’s judgment of the larger framework. The resolution of the problem of relativism therefore appears to hang on the possibility of judging frameworks or rationalities, or judging between frameworks or rationalities from a position that does not presuppose the truth of the framework or rationality, but no such theoretical standpoint is humanly possible.
  • MacIntyre finds that the world itself provides the criterion for the testing of rationalities, and he finds that there is no criterion except the world itself that can stand as the measure of the truth of any philosophical theory.
  • MacIntyre’s philosophy is indebted to the philosophy of science, which recognizes the historicism of scientific enquiry even as it seeks a truthful understanding of the world. MacIntyre’s philosophy does not offer a priori certainty about any theory or principle; it examines the ways in which reflection upon experience supports, challenges, or falsifies theories that have appeared to be the best theories so far to the people who have accepted them so far. MacIntyre’s ideal enquirers remain Hamlets, not Emmas.
  • history shows us that individuals, communities, and even whole nations may commit themselves militantly over long periods of their histories to doctrines that their ideological adversaries find irrational. This qualified relativism of appearances has troublesome implications for anyone who believes that philosophical enquiry can easily provide certain knowledge of the world
  • According to MacIntyre, theories govern the ways that we interpret the world and no theory is ever more than “the best standards so far” (3RV, p. 65). Our theories always remain open to improvement, and when our theories change, the appearances of our world—the apparent truths of claims judged within those theoretical frameworks—change with them.
  • From the subjective standpoint of the human enquirer, MacIntyre finds that theories, concepts, and facts all have histories, and they are all liable to change—for better or for worse.
  • MacIntyre holds that the rationality of individuals is not only tradition-constituted, it is also tradition constitutive, as individuals make their own contributions to their own rationality, and to the rationalities of their communities. Rationality is not fixed, within either the history of a community or the life of a person
  • The modern account of first principles justifies an approach to philosophy that rejects tradition. The modern liberal individualist approach is anti-traditional. It denies that our understanding is tradition-constituted and it denies that different cultures may differ in their standards of rationality and justice:
  • Modernity does not see tradition as the key that unlocks moral and political understanding, but as a superfluous accumulation of opinions that tend to prejudice moral and political reasoning.
  • Although modernity rejects tradition as a method of moral and political enquiry, MacIntyre finds that it nevertheless bears all the characteristics of a moral and political tradition.
  • If historical narratives are only projections of the interests of historians, then it is difficult to see how this historical narrative can claim to be truthful
  • For these post-modern theorists, “if the Enlightenment conceptions of truth and rationality cannot be sustained,” either relativism or perspectivism “is the only possible alternative” (p. 353). MacIntyre rejects both challenges by developing his theory of tradition-constituted and tradition-constitutive rationality on pp. 354-369
  • How, then, is one to settle challenges between two traditions? It depends on whether the adherents of either take the challenges of the other tradition seriously. It depends on whether the adherents of either tradition, on seeing a failure in their own tradition are willing to consider an answer offered by their rival (p. 355)
  • how a person with no traditional affiliation is to deal with the conflicting claims of rival traditions: “The initial answer is: that will depend upon who you are and how you understand yourself. This is not the kind of answer which we have been educated to expect in philosophy”
  • MacIntyre focuses the critique of modernity on the question of rational justification. Modern epistemology stands or falls on the possibility of Cartesian epistemological first principles. MacIntyre’s history exposes that notion of first principle as a fiction, and at the same time demonstrates that rational enquiry advances (or declines) only through tradition
  • MacIntyre cites Foucault’s 1966 book, Les Mots et les choses (The Order of Things, 1970) as an example of the self-subverting character of Genealogical enquiry
  • Foucault’s book reduces history to a procession of “incommensurable ordered schemes of classification and representation” none of which has any greater claim to truth than any other, yet this book “is itself organized as a scheme of classification and representation.”
  • From MacIntyre’s perspective, there is no question of deciding whether or not to work within a tradition; everyone who struggles with practical, moral, and political questions simply does. “There is no standing ground, no place for enquiry . . . apart from that which is provided by some particular tradition or other”
  • Three Rival Versions of Moral Enquiry (1990). The central idea of the Gifford Lectures is that philosophers make progress by addressing the shortcomings of traditional narratives about the world, shortcomings that become visible either through the failure of traditional narratives to make sense of experience, or through the introduction of contradictory narratives that prove impossible to dismiss
  • MacIntyre compares three traditions exemplified by three literary works published near the end of Adam Gifford’s life (1820–1887)
  • The Ninth Edition of the Encyclopaedia Britannica (1875–1889) represents the modern tradition of trying to understand the world objectively without the influence of tradition.
  • The Genealogy of Morals (1887), by Friedrich Nietzsche embodies the post-modern tradition of interpreting all traditions as arbitrary impositions of power.
  • The encyclical letter Aeterni Patris (1879) of Pope Leo XIII exemplifies the approach of acknowledging one’s predecessors within one’s own tradition of enquiry and working to advance or improve that tradition in the pursuit of objective truth. 
  • Of the three versions of moral enquiry treated in 3RV, only tradition, exemplified in 3RV by the Aristotelian, Thomistic tradition, understands itself as a tradition that looks backward to predecessors in order to understand present questions and move forward
  • Encyclopaedia obscures the role of tradition by presenting the most current conclusions and convictions of a tradition as if they had no history, and as if they represented the final discovery of unalterable truth
  • Encyclopaedists focus on the present and ignore the past.
  • Genealogists, on the other hand, focus on the past in order to undermine the claims of the present.
  • In short, Genealogy denies the teleology of human enquiry by denying (1) that historical enquiry has been fruitful, (2) that the enquiring person has a real identity, and (3) that enquiry has a real goal. MacIntyre finds this mode of enquiry incoherent.
  • Genealogy is self-deceiving insofar as it ignores the traditional and teleological character of its enquiry.
  • Genealogical moral enquiry must make similar exceptions to its treatments of the unity of the enquiring subject and the teleology of moral enquiry; thus “it seems to be the case that the intelligibility of genealogy requires beliefs and allegiances of a kind precluded by the genealogical stance” (3RV, pp. 54-55)
  • MacIntyre uses Thomism because it applies the traditional mode of enquiry in a self-conscious manner. Thomistic students learn the work of philosophical enquiry as apprentices in a craft (3RV, p. 61), and maintain the principles of the tradition in their work to extend the understanding of the tradition, even as they remain open to the criticism of those principles.
  • 3RV uses Thomism as its example of tradition, but this use should not suggest that MacIntyre identifies “tradition” with Thomism or Thomism-as-a-name-for-the-Western-tradition. As noted above, WJWR distinguished four traditions of enquiry within the Western European world alone
  • MacIntyre’s emphasis on the temporality of rationality in traditional enquiry makes tradition incompatible with the epistemological projects of modern philosophy
  • Tradition is not merely conservative; it remains open to improvement,
  • Tradition differs from both encyclopaedia and genealogy in the way it understands the place of its theories in the history of human enquiry. The adherent of a tradition must understand that “the rationality of a craft is justified by its history so far,” thus it “is inseparable from the tradition through which it was achieved”
  • MacIntyre uses Thomas Aquinas to illustrate the revolutionary potential of traditional enquiry. Thomas was educated in Augustinian theology and Aristotelian philosophy, and through this education he began to see not only the contradictions between the two traditions, but also the strengths and weaknesses that each tradition revealed in the other. His education also helped him to discover a host of questions and problems that had to be answered and solved. Many of Thomas Aquinas’ responses to these concerns took the form of disputed questions. “Yet to each question the answer produced by Aquinas as a conclusion is no more than and, given Aquinas’s method, cannot but be no more than, the best answer reached so far. And hence derives the essential incompleteness”
  • argue that the virtues are essential to the practice of independent practical reason. The book is relentlessly practical; its arguments appeal only to experience and to purposes, and to the logic of practical reasoning.
  • Like other intelligent animals, human beings enter life vulnerable, weak, untrained, and unknowing, and face the likelihood of infirmity in sickness and in old age. Like other social animals, humans flourish in groups. We learn to regulate our passions, and to act effectively alone and in concert with others through an education provided within a community. MacIntyre’s position allows him to look to the animal world to find analogies to the role of social relationships in the moral formation of human beings
  • The task for the human child is to make “the transition from the infantile exercise of animal intelligence to the exercise of independent practical reasoning” (DRA, p. 87). For a child to make this transition is “to redirect and transform her or his desires, and subsequently to direct them consistently towards the goods of different stages of her or his life” (DRA, p. 87). The development of independent practical reason in the human agent requires the moral virtues in at least three ways.
  • DRA presents moral knowledge as a “knowing how,” rather than as a “knowing that.” Knowledge of moral rules is not sufficient for a moral life; prudence is required to enable the agent to apply the rules well.
  • “Knowing how to act virtuously always involves more than rule-following” (DRA, p. 93). The prudent person can judge what must be done in the absence of a rule and can also judge when general norms cannot be applied to particular cases.
  • Flourishing as an independent practical reasoner requires the virtues in a second way, simply because sometimes we need our friends to tell us who we really are. Independent practical reasoning also requires self-knowledge, but self-knowledge is impossible without the input of others whose judgment provides a reliable touchstone to test our beliefs about ourselves. Self-knowledge therefore requires the virtues that enable an agent to sustain formative relationships and to accept the criticism of trusted friends
  • Human flourishing requires the virtues in a third way, by making it possible to participate in social and political action. They enable us to “protect ourselves and others against neglect, defective sympathies, stupidity, acquisitiveness, and malice” (DRA, p. 98) by enabling us to form and sustain social relationships through which we may care for one another in our infirmities, and pursue common goods with and for the other members of our societies.
  • MacIntyre argues that it is impossible to find an external standpoint, because rational enquiry is an essentially social work (DRA, pp. 156-7). Because it is social, shared rational enquiry requires moral commitment to, and practice of, the virtues to prevent the more complacent members of communities from closing off critical reflection upon “shared politically effective beliefs and concepts”
  • MacIntyre finds himself compelled to answer what may be called the question of moral provincialism: If one is to seek the truth about morality and justice, it seems necessary to “find a standpoint that is sufficiently external to the evaluative attitudes and practices that are to be put to the question.” If it is impossible for the agent to take such an external standpoint, if the agent’s commitments preclude radical criticism of the virtues of the community, does that leave the agent “a prisoner of shared prejudices” (DRA, p. 154)?
  • The book moves from MacIntyre’s assessment of human needs for the virtues to the political implications of that assessment. Social and political institutions that form and enable independent practical reasoning must “satisfy three conditions.” (1) They must enable their members to participate in shared deliberations about the communities’ actions. (2) They must establish norms of justice “consistent with exercise of” the virtue of justice. (3) They must enable the strong “to stand proxy” as advocates for the needs of the weak and the disabled.
  • The social and political institutions that MacIntyre recommends cannot be identified with the modern nation state or the modern nuclear family
  • The political structures necessary for human flourishing are essentially local
  • Yet local communities support human flourishing only when they actively support “the virtues of just generosity and shared deliberation”
  • MacIntyre rejects individualism and insists that we view human beings as members of communities who bear specific debts and responsibilities because of our social identities. The responsibilities one may inherit as a member of a community include debts to one’s forebears that one can only repay to people in the present and future
  • The constructive argument of the second half of the book begins with traditional accounts of the excellences or virtues of practical reasoning and practical rationality rather than virtues of moral reasoning or morality. These traditional accounts define virtue as arête, as excellence
  • Practices are supported by institutions like chess clubs, hospitals, universities, industrial corporations, sports leagues, and political organizations.
  • Practices exist in tension with these institutions, since the institutions tend to be oriented to goods external to practices. Universities, hospitals, and scholarly societies may value prestige, profitability, or relations with political interest groups above excellence in the practices they are said to support.
  • Personal desires and institutional pressures to pursue external goods may threaten to derail practitioners’ pursuits of the goods internal to practices. MacIntyre defines virtue initially as the quality of character that enables an agent to overcome these temptations:
  • “A virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices
  • Excellence as a human agent cannot be reduced to excellence in a particular practice (See AV, pp. 204–
  • The virtues therefore are to be understood as those dispositions which will not only sustain practices and enable us to achieve the goods internal to practices, but which will also sustain us in the relevant kind of quest for the good, by enabling us to overcome the harms, dangers, temptations, and distractions which we encounter, and which will furnish us with increasing self-knowledge and increasing knowledge of the good (AV, p. 219).
  • The excellent human agent has the moral qualities to seek what is good and best both in practices and in life as a whole.
  • The virtues find their point and purpose not only in sustaining those relationships necessary if the variety of goods internal to practices are to be achieved and not only in sustaining the form of an individual life in which that individual may seek out his or her good as the good of his or her whole life, but also in sustaining those traditions which provide both practices and individual lives with their necessary historical context (AV, p. 223)
  • Since “goods, and with them the only grounds for the authority of laws and virtues, can only be discovered by entering into those relationships which constitute communities whose central bond is a shared vision of and understanding of goods” (AV, p. 258), any hope for the transformation and renewal of society depends on the development and maintenance of such communities.
  • MacIntyre’s Aristotelian approach to ethics as a study of human action distinguishes him from post-Kantian moral philosophers who approach ethics as a means of determining the demands of objective, impersonal, universal morality
  • This modern approach may be described as moral epistemology. Modern moral philosophy pretends to free the individual to determine for her- or himself what she or he must do in a given situation, irrespective of her or his own desires; it pretends to give knowledge of universal moral laws
  • Aristotelian metaphysicians, particularly Thomists who define virtue in terms of the perfection of nature, rejected MacIntyre’s contention that an adequate Aristotelian account of virtue as excellence in practical reasoning and human action need not appeal to Aristotelian metaphysics
  • one group of critics rejects MacIntyre’s Aristotelianism because they hold that any Aristotelian account of the virtues must first account for the truth about virtue in terms of Aristotle’s philosophy of nature, which MacIntyre had dismissed in AV as “metaphysical biology”
  • Many of those who rejected MacIntyre’s turn to Aristotle define “virtue” primarily along moral lines, as obedience to law or adherence to some kind of natural norm. For these critics, “virtuous” appears synonymous with “morally correct;” their resistance to MacIntyre’s appeal to virtue stems from their difficulties either with what they take to be the shortcomings of MacIntyre’s account of moral correctness or with the notion of moral correctness altogether
  • MacIntyre continues to argue from the experience of practical reasoning to the demands of moral education.
  • Descartes and his successors, by contrast, along with certain “notable Thomists of the last hundred years” (p. 175), have proposed that philosophy begins from knowledge of some “set of necessarily true first principles which any truly rational person is able to evaluate as true” (p. 175). Thus for the moderns, philosophy is a technical rather than moral endeavor
  • MacIntyre distinguishes two related challenges to his position, the “relativist challenge” and the “perspectivist challenge.” These two challenges both acknowledge that the goals of the Enlightenment cannot be met and that, “the only available standards of rationality are those made available by and within traditions” (p. 252); they conclude that nothing can be known to be true or false
  • MacIntyre follows the progress of the Western tradition through “three distinct traditions:” from Homer and Aristotle to Thomas Aquinas, from Augustine to Thomas Aquinas and from Augustine through Calvin to Hume
  • Chapter 17 examines the modern liberal denial of tradition, and the ironic transformation of liberalism into the fourth tradition to be treated in the book.
  • MacIntyre credits John Stuart Mill and Thomas Aquinas as “two philosophers of the kind who by their writing send us beyond philosophy into immediate encounter with the ends of life
  • First, both were engaged by questions about the ends of life as questioning human beings and not just as philosophers. . . .
  • Secondly, both Mill and Aquinas understood their speaking and writing as contributing to an ongoing philosophical conversation. . . .
  • Thirdly, it matters that both the end of the conversation and the good of those who participate in it is truth and that the nature of truth, of good, of rational justification, and of meaning therefore have to be central topics of that conversation (Tasks, pp. 130-1).
  • Without these three characteristics, philosophy is first reduced to “the exercise of a set of analytic and argumentative skills. . . . Secondly, philosophy may thereby become a diversion from asking questions about the ends of life with any seriousness”
  • Neither Rosenzweig nor Lukács made philosophical progress because both failed to relate “their questions about the ends of life to the ends of their philosophical writing”
  • First, any adequate philosophical history or biography must determine whether the authors studied remain engaged with the questions that philosophy studies, or set the questions aside in favor of the answers. Second, any adequate philosophical history or biography must determine whether the authors studied insulated themselves from contact with conflicting worldviews or remained open to learning from every available philosophical approach. Third, any adequate philosophical history or biography must place the authors studied into a broader context that shows what traditions they come from and “whose projects” they are “carrying forward
  • MacIntyre’s recognition of the connection between an author’s pursuit of the ends of life and the same author’s work as a philosophical writer prompts him to finish the essay by demanding three things of philosophical historians and biographers
  • Philosophy is not just a study; it is a practice. Excellence in this practice demands that an author bring her or his struggles with the questions of the ends of philosophy into dialogue with historic and contemporary texts and authors in the hope of making progress in answering those questions
  • MacIntyre defends Thomistic realism as rational enquiry directed to the discovery of truth.
  • The three Thomistic essays in this book challenge those caricatures by presenting Thomism in a way that people outside of contemporary Thomistic scholarship may find surprisingly flexible and open
  • To be a moral agent, (1) one must understand one’s individual identity as transcending all the roles that one fills; (2) one must see oneself as a practically rational individual who can judge and reject unjust social standards; and (3) one must understand oneself as “as accountable to others in respect of the human virtues and not just in respect of [one’s] role-performances
  • J is guilty because he complacently accepted social structures that he should have questioned, structures that undermined his moral agency. This essay shows that MacIntyre’s ethics of human agency is not just a descriptive narrative about the manner of moral education; it is a standard laden account of the demands of moral agency.
  • MacIntyre considers “the case of J” (J, for jemand, the German word for “someone”), a train controller who learned, as a standard for his social role, to take no interest in what his trains carried, even during war time when they carried “munitions and . . . Jews on their way to extermination camps”
  • J had learned to do his work for the railroad according to one set of standards and to live other parts of his life according to other standards, so that this compliant participant in “the final solution” could contend, “You cannot charge me with moral failure” (E&P, p. 187).
  • The epistemological theories of modern moral philosophy were supposed to provide rational justification for rules, policies, and practical determinations according to abstract universal standards, but MacIntyre has dismissed those theories
  • Modern metaethics is supposed to enable its practitioners to step away from the conflicting demands of contending moral traditions and to judge those conflicts from a neutral position, but MacIntyre has rejected this project as well
  • In his ethical writings, MacIntyre seeks only to understand how to liberate the human agent from blindness and stupidity, to prepare the human agent to recognize what is good and best to do in the concrete circumstances of that agent’s own life, and to strengthen the agent to follow through on that judgment.
  • In his political writings, MacIntyre investigates the role of communities in the formation of effective rational agents, and the impact of political institutions on the lives of communities. This kind of ethics and politics is appropriately named the ethics of human agency.
  • The purpose of the modern moral philosophy of authors like Kant and Mill was to determine, rationally and universally, what kinds of behavior ought to be performed—not in terms of the agent’s desires or goals, but in terms of universal, rational duties. Those theories purported to let agents know what they ought to do by providing knowledge of duties and obligations, thus they could be described as theories of moral epistemology.
  • Contemporary virtue ethics purports to let agents know what qualities human beings ought to have, and the reasons that we ought to have them, not in terms of our fitness for human agency, but in the same universal, disinterested, non-teleological terms that it inherits from Kant and Mill.
  • For MacIntyre, moral knowledge remains a “knowing how” rather than a “knowing that;” MacIntyre seeks to identify those moral and intellectual excellences that make human beings more effective in our pursuit of the human good.
  • MacIntyre’s purpose in his ethics of human agency is to consider what it means to seek one’s good, what it takes to pursue one’s good, and what kind of a person one must become if one wants to pursue that good effectively as a human agent.
  • As a philosophy of human agency, MacIntyre’s work belongs to the traditions of Aristotle and Thomas Aquinas.
  • in keeping with the insight of Marx’s third thesis on Feuerbach, it maintained the common condition of theorists and people as peers in the pursuit of the good life.
  • He holds that the human good plays a role in our practical reasoning whether we recognize it or not, so that some people may do well without understanding why (E&P, p. 25). He also reads Aristotle as teaching that knowledge of the good can make us better agents
  • AV defines virtue in terms of the practical requirements for excellence in human agency, in an agent’s participation in practices (AV, ch. 14), in an agent’s whole life, and in an agent’s involvement in the life of her or his community
  • MacIntyre’s Aristotelian concept of “human action” opposes the notion of “human behavior” that prevailed among mid-twentieth-century determinist social scientists. Human actions, as MacIntyre understands them, are acts freely chosen by human agents in order to accomplish goals that those agents pursue
  • Human behavior, according to mid-twentieth-century determinist social scientists, is the outward activity of a subject, which is said to be caused entirely by environmental influences beyond the control of the subject.
  • Rejecting crude determinism in social science, and approaches to government and public policy rooted in determinism, MacIntyre sees the renewal of human agency and the liberation of the human agent as central goals for ethics and politics.
  • MacIntyre’s Aristotelian account of “human action” examines the habits that an agent must develop in order to judge and act most effectively in the pursuit of truly choice-worthy ends
  • MacIntyre seeks to understand what it takes for the human person to become the kind of agent who has the practical wisdom to recognize what is good and best to do and the moral freedom to act on her or his best judgment.
  • MacIntyre rejected the determinism of modern social science early in his career (“Determinism,” 1957), yet he recognizes that the ability to judge well and act freely is not simply given; excellence in judgment and action must be developed, and it is the task of moral philosophy to discover how these excellences or virtues of the human agent are established, maintained, and strengthened
  • MacIntyre’s Aristotelian philosophy investigates the conditions that support free and deliberate human action in order to propose a path to the liberation of the human agent through participation in the life of a political community that seeks its common goods through the shared deliberation and action of its members
  • As a classics major at Queen Mary College in the University of London (1945-1949), MacIntyre read the Greek texts of Plato and Aristotle, but his studies were not limited to the grammars of ancient languages. He also examined the ethical theories of Immanuel Kant and John Stuart Mill. He attended the lectures of analytic philosopher A. J. Ayer and of philosopher of science Karl Popper. He read Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, Jean-Paul Sartre’s L'existentialisme est un humanisme, and Marx’s Eighteenth Brumaire of Napoleon Bonaparte (What happened, pp. 17-18). MacIntyre met the sociologist Franz Steiner, who helped direct him toward approaching moralities substantively
  • Alasdair MacIntyre’s philosophy builds on an unusual foundation. His early life was shaped by two conflicting systems of values. One was “a Gaelic oral culture of farmers and fishermen, poets and storytellers.” The other was modernity: “The modern world was a culture of theories rather than stories” (MacIntyre Reader, p. 255). MacIntyre embraced both value systems
  • From Marxism, MacIntyre learned to see liberalism as a destructive ideology that undermines communities in the name of individual liberty and consequently undermines the moral formation of human agents
  • For MacIntyre, Marx’s way of seeing through the empty justifications of arbitrary choices to consider the real goals and consequences of political actions in economic and social terms would remain the principal insight of Marxism
  • After his retirement from teaching, MacIntyre has continued his work of promoting a renewal of human agency through an examination of the virtues demanded by practices, integrated human lives, and responsible engagement with community life. He is currently affiliated with the Centre for Contemporary Aristotelian Studies in Ethics and Politics (CASEP) at London Metropolitan University.
  • The second half of AV proposes a conception of practice and practical reasoning and the notion of excellence as a human agent as an alternative to modern moral philosophy
  • AV rejects the view of “modern liberal individualism” in which autonomous individuals use abstract moral principles to determine what they ought to do. The critique of modern normative ethics in the first half of AV rejects modern moral reasoning for its failure to justify its premises, and criticizes the frequent use of the rhetoric of objective morality and scientific necessity to manipulate people to accept arbitrary decisions
  • MacIntyre uses “modern liberal individualism” to name a much broader category that includes both liberals and conservatives in contemporary American political parlance, as well as some Marxists and anarchists (See ASIA, pp. 280-284). Conservatism, liberalism, Marxism, and anarchism all present the autonomous individual as the unit of civil society
  • The sources of modern liberal individualism—Hobbes, Locke, and Rousseau—assert that human life is solitary by nature and social by habituation and convention. MacIntyre’s Aristotelian tradition holds, on the contrary, that human life is social by nature.
  • MacIntyre identifies moral excellence with effective human agency, and seeks a political environment that will help to liberate human agents to recognize and seek their own goods, as components of the common goods of their communities, more effectively. For MacIntyre therefore, ethics and politics are bound together.
  • For MacIntyre ethics is not an application of principles to facts, but a study of moral action. Moral action, free human action, involves decisions to do things in pursuit of goals, and it involves the understanding of the implications of one’s actions for the whole variety of goals that human agents seek
  • In this sense, “To act morally is to know how to act” (SMJ, p. 56). “Morality is not a ‘knowing that’ but a ‘knowing how’”
  • If human action is a ‘knowing how,’ then ethics must also consider how one learns ‘how.’ Like other forms of ‘knowing how,’ MacIntyre finds that one learns how to act morally within a community whose language and shared standards shape our judgment
  • MacIntyre had concluded that ethics is not an abstract exercise in the assessment of facts; it is a study of free human action and of the conditions that enable rational human agency.
  • MacIntyre gives Marx credit for concluding in the third of the Theses on Feuerbach, that the only way to change society is to change ourselves, and that “The coincidence of the changing of human activity or self-changing can only be comprehended and rationally understood as revolutionary practice”
  • MacIntyre distinguishes “religion which is an opiate for the people from religion which is not” (MI, p. 83). He condemns forms of religion that justify social inequities and encourage passivity. He argues that authentic Christian teaching criticizes social structures and encourages action
  • Where “moral philosophy textbooks” discuss the kinds of maxims that should guide “promise-keeping, truth-telling, and the like,” MacIntyre argues that moral maxims do not guide real agents in real life at all. “They do not guide us because we do not need to be guided. We know what to do” (ASIA, p. 106). Sometimes we do this without any maxims at all, or even against all the maxims we know. MacIntyre illustrates his point with Huckleberry Finn’s decision to help Jim, Miss Watson’s escaped slave, to make his way to freedom
  • MacIntyre develops the ideas that morality emerges from history, and that morality organizes the common life of a community
  • The book concludes that the concepts of morality are neither timeless nor ahistorical, and that understanding the historical development of ethical concepts can liberate us “from any false absolutist claims” (SHE, p. 269). Yet this conclusion need not imply that morality is essentially arbitrary or that one could achieve freedom by liberating oneself from the morality of one’s society.
  • From this “Aristotelian point of view,” “modern morality” begins to go awry when moral norms are separated from the pursuit of human goods and moral behavior is treated as an end in itself. This separation characterizes Christian divine command ethics since the fourteenth century and has remained essential to secularized modern morality since the eighteenth century
  • From MacIntyre’s “Aristotelian point of view,” the autonomy granted to the human agent by modern moral philosophy breaks down natural human communities and isolates the individual from the kinds of formative relationships that are necessary to shape the agent into an independent practical reasoner.
  • the 1977 essay “Epistemological Crises, Dramatic Narrative, and the Philosophy of Science” (Hereafter EC). This essay, MacIntyre reports, “marks a major turning-point in my thought in the 1970s” (The Tasks of Philosophy, p. vii) EC may be described fairly as MacIntyre’s discourse on method
  • First, Philosophy makes progress through the resolution of problems. These problems arise when the theories, histories, doctrines and other narratives that help us to organize our experience of the world fail us, leaving us in “epistemological crises.” Epistemological crises are the aftermath of events that undermine the ways that we interpret our world
  • it presents three general points on the method for philosophy.
  • To live in an epistemological crisis is to be aware that one does not know what one thought one knew about some particular subject and to be anxious to recover certainty about that subject.
  • To resolve an epistemological crisis it is not enough to impose some new way of interpreting our experience, we also need to understand why we were wrong before: “When an epistemological crisis is resolved, it is by the construction of a new narrative which enables the agent to understand both how he or she could intelligibly have held his or her original beliefs and how he or she could have been so drastically misled by them
  • MacIntyre notes, “Philosophers have customarily been Emmas and not Hamlets” (p. 6); that is, philosophers have treated their conclusions as accomplished truths, rather than as “more adequate narratives” (p. 7) that remain open to further improvement.
  • To illustrate his position on the open-endedness of enquiry, MacIntyre compares the title characters of Shakespeare’s Hamlet and Jane Austen’s Emma. When Emma finds that she is deeply misled in her beliefs about the other characters in her story, Mr. Knightly helps her to learn the truth and the story comes to a happy ending (p. 6). Hamlet, by contrast, finds no pat answers to his questions; rival interpretations remain throughout the play, so that directors who would stage the play have to impose their own interpretations on the script
  • Another approach to education is the method of Descartes, who begins by rejecting everything that is not clearly and distinctly true as unreliable and false in order to rebuild his understanding of the world on a foundation of undeniable truth.
  • Descartes presents himself as willfully rejecting everything he had believed, and ignores his obvious debts to the Scholastic tradition, even as he argues his case in French and Latin. For MacIntyre, seeking epistemological certainty through universal doubt as a precondition for enquiry is a mistake: “it is an invitation not to philosophy but to mental breakdown, or rather to philosophy as a means of mental breakdown.
  • MacIntyre contrasts Descartes’ descent into mythical isolation with Galileo, who was able to make progress in astronomy and physics by struggling with the apparently insoluble questions of late medieval astronomy and physics, and radically reinterpreting the issues that constituted those questions
  • To make progress in philosophy one must sort through the narratives that inform one’s understanding, struggle with the questions that those narratives raise, and on occasion, reject, replace, or reinterpret portions of those narratives and propose those changes to the rest of one’s community for assessment. Human enquiry is always situated within the history and life of a community.
  • The third point of EC is that we can learn about progress in philosophy from the philosophy of science
  • Kuhn’s “paradigm shifts,” however, are unlike MacIntyre’s resolutions of epistemological crises in two ways.
  • First they are not rational responses to specific problems. Kuhn compares paradigm shifts to religious conversions (pp. 150, 151, 158), stressing that they are not guided by rational norms and he claims that the “mopping up” phase of a paradigm shift is a matter of convention in the training of new scientists and attrition among the holdouts of the previous paradigm
  • Second, the new paradigm is treated as a closed system of belief that regulates a new period of “normal science”; Kuhn’s revolutionary scientists are Emmas, not Hamlets
  • MacIntyre proposes elements of Imre Lakatos’ philosophy of science as correctives to Kuhn’s. While Lakatos has his own shortcomings, his general account of the methodologies of scientific research programs recognizes the role of reason in the transitions between theories and between research programs (Lakatos’ analog to Kuhn’s paradigms or disciplinary matrices). Lakatos presents science as an open-ended enquiry, in which every theory may eventually be replaced by more adequate theories. For Lakatos, unlike Kuhn, rational scientific progress occurs when a new theory can account both for the apparent promise and for the actual failure of the theory it replaces.
  • The third conclusion of MacIntyre’s essay is that decisions to support some theories over others may be justified rationally to the extent that those theories allow us to understand our experience and our history, including the history of the failures of inadequate theories
  • For Aristotle, moral philosophy is a study of practical reasoning, and the excellences or virtues that Aristotle recommends in the Nicomachean Ethics are the intellectual and moral excellences that make a moral agent effective as an independent practical reasoner.
  • MacIntyre also finds that the contending parties have little interest in the rational justification of the principles they use. The language of moral philosophy has become a kind of moral rhetoric to be used to manipulate others in defense of the arbitrary choices of its users
  • examining the current condition of secular moral and political discourse. MacIntyre finds contending parties defending their decisions by appealing to abstract moral principles, but he finds their appeals eclectic, inconsistent, and incoherent.
  • The secular moral philosophers of the eighteenth and nineteenth centuries shared strong and extensive agreements about the content of morality (AV, p. 51) and believed that their moral philosophy could justify the demands of their morality rationally, free from religious authority.
  • MacIntyre traces the lineage of the culture of emotivism to the secularized Protestant cultures of northern Europe
  • Modern moral philosophy had thus set for itself an incoherent goal. It was to vindicate both the moral autonomy of the individual and the objectivity, necessity, and categorical character of the rules of morality
  • MacIntyre turns to an apparent alternative, the pragmatic expertise of professional managers. Managers are expected to appeal to the facts to make their decisions on the objective basis of effectiveness, and their authority to do this is based on their knowledge of the social sciences
  • An examination of the social sciences reveals, however, that many of the facts to which managers appeal depend on sociological theories that lack scientific status. Thus, the predictions and demands of bureaucratic managers are no less liable to ideological manipulation than the determinations of modern moral philosophers.
  • Modern moral philosophy separates moral reasoning about duties and obligations from practical reasoning about ends and practical deliberation about the means to one’s ends, and in doing so it separates morality from practice.
  • Many Europeans also lost the practical justifications for their moral norms as they approached modernity; for these Europeans, claiming that certain practices are “immoral,” and invoking Kant’s categorical imperative or Mill’s principle of utility to explain why those practices are immoral, seems no more adequate than the Polynesian appeal to taboo.
  • MacIntyre sifts these definitions and then gives his own definition of virtue, as excellence in human agency, in terms of practices, whole human lives, and traditions in chapters 14 and 15 of AV.
  • In the most often quoted sentence of AV, MacIntyre defines a practice as (1) a complex social activity that (2) enables participants to gain goods internal to the practice. (3) Participants achieve excellence in practices by gaining the internal goods. When participants achieve excellence, (4) the social understandings of excellence in the practice, of the goods of the practice, and of the possibility of achieving excellence in the practice “are systematically extended”
  • Practices, like chess, medicine, architecture, mechanical engineering, football, or politics, offer their practitioners a variety of goods both internal and external to these practices. The goods internal to practices include forms of understanding or physical abilities that can be acquired only by pursuing excellence in the associated practice
  • Goods external to practices include wealth, fame, prestige, and power; there are many ways to gain these external goods. They can be earned or purchased, either honestly or through deception; thus the pursuit of these external goods may conflict with the pursuit of the goods internal to practices.
  • An intelligent child is given the opportunity to win candy by learning to play chess. As long as the child plays chess only to win candy, he has every reason to cheat if by doing so he can win more candy. If the child begins to desire and pursue the goods internal to chess, however, cheating becomes irrational, because it is impossible to gain the goods internal to chess or any other practice except through an honest pursuit of excellence. Goods external to practices may nevertheless remain tempting to the practitioner.
  • Since MacIntyre finds social identity necessary for the individual, MacIntyre’s definition of the excellence or virtue of the human agent needs a social dimension:
  • These responsibilities also include debts incurred by the unjust actions of one’s predecessors.
  • The enslavement and oppression of black Americans, the subjugation of Ireland, and the genocide of the Jews in Europe remained quite relevant to the responsibilities of citizens of the United States, England, and Germany in 1981, as they still do today.
  • Thus an American who said “I never owned any slaves,” “the Englishman who says ‘I never did any wrong to Ireland,’” or “the young German who believes that being born after 1945 means that what Nazis did to Jews has no moral relevance to his relationship to his Jewish contemporaries” all exhibit a kind of intellectual and moral failure.
  • “I am born with a past, and to cut myself off from that past in the individualist mode, is to deform my present relationships” (p. 221).  For MacIntyre, there is no moral identity for the abstract individual; “The self has to find its moral identity in and through its membership in communities” (p. 221).
marleen_ueberall

Why Is Language Important to Culture? - 0 views

  • Why Is Language Important?
  • It is a uniquely human gift that lets us communicate and differentiates us from other primates.
  • language is much more than just a means of communication. It is also an inseparable part of our culture.
  • ...22 more annotations...
  • different cultures have a predominant fashion in which they use their language and they have differences which cannot be underestimated.
  • one of the best-known linguists in the world argues that all languages are dialects of one language, which is the human language. He says that even though they appear very different, they are in fact very similar.
  • there is no doubt that language and culture are closely connected.
  • Direct and Indirect Styles
  • We are encouraged to be direct and to speak our mind.
  • Asian cultures use an indirect style of communication. Words such as "perhaps" and "maybe" are used much more frequently than "yes", "no" or "for sure".
  • Personal and Contextual Styles
  • Two of the most frequently used words in our culture are "I" and "you".
  • American culture is not very formal, so it is appropriate to say "you" to your boss, to the President, to a stranger, to your spouse or to your child.
  • In the Thai language there are twelve forms of the pronoun "you", which depend on factors such as status or level of intimacy.
  • The style of language is focused on the speaker and depends on someone’s status and identity.
  • The Japanese pay a lot of attention to someone’s status and use linguistic forms called honorifics, which are chosen according to the rank of the person who is speaking
  • Untranslatable Words
  • Many people don’t realize that there are plenty of words that cannot be translated from one language to another simply because they don’t exist in another language.
  • The word “shopping", which describes one of the most favorite activities of Americans, doesn’t exist in some other languages (such as for example in Russian) as a noun.
  • Another interesting example is the word “ilunga".
  • It comes from the Democratic Republic of Congo and is considered to be the most untranslatable word in the world.
  • It describes a person who is ready to forgive any transgression a first and a second time, but never a third time.
  • Language Is Changing Along with the Culture
  • "he" and "his" were used generically in the English language.
  • Since the United States and most of English-speaking Western Europe are becoming less and less male-dominant cultures, grammar rules have changed and new gender-agreement rules have been created.
  • Fifty years ago nobody suspected that one day in the United States the words "mother" and "father" would become controversial and that some schools would agree to change them both to "parent".
krystalxu

Language Acquisition Theory | Simply Psychology - 0 views

  • After more than 60 years of research into child language development, the mechanism that enables children to segment syllables and words out of the strings of sounds they hear, and to acquire grammar to understand and produce language is still quite an enigma.
  • Skinner argued that children learn language based on behaviorist reinforcement principles by associating words with meanings.
  • children will never acquire the tools needed for processing an infinite number of sentences if the language acquisition mechanism were dependent on language input alone.
  • ...3 more annotations...
  • investigate the nature of these assumed grammatical categories and the research is still ongoing.
  • It is suggested that children are sensitive to patterns in language which enables the acquisition process.
  • What is the exact process that transforms the child’s utterances into grammatically correct, adult-like speech? How much does the child need to be exposed to language to achieve the adult-like state?
manhefnawi

Linguists Say We Might Be Able to Communicate With Aliens If We Ever Encounter Them | Mental Floss - 0 views

  • His theory of universal grammar posits that there's a genetic component to language, and the ability to acquire and comprehend language is innate.