TOK Friends: Group items tagged “conceptual”

Javier E

Philosophy Is Not a Science - NYTimes.com - 0 views

  • what objective knowledge can philosophy bring that is not already determinable by science?
  • numerous philosophers have come to believe, in concert with the prejudices of our age, that only science holds the potential to solve such persistent philosophical mysteries as the nature of truth, life, mind, meaning, justice, the good and the beautiful.
  • myriad contemporary philosophers are perfectly willing to offer themselves up as intellectual servants or ushers of scientific progress. Their research largely functions as a spearhead for scientific exploration and as a balm for making those pursuits more palpable and palatable to the wider population.
  • ...13 more annotations...
  • While science and philosophy do at times overlap, they are fundamentally different approaches to understanding. So philosophers should not add to the conceptual confusion that subsumes all knowledge into science.
  • various disciplines we ordinarily treat as science are at least as philosophical as they are scientific, if not more so. Take for example mathematics, theoretical physics, psychology and economics. These are predominantly rational conceptual disciplines. That is, they are not chiefly reliant on empirical observation. For unlike science, they may be conducted while sitting in an armchair with eyes closed.
  • unlike empirical observations, which may be mistaken or incomplete, philosophical findings depend primarily on rational and logical principles. As such, whereas science tends to alter and update its findings day to day through trial and error, logical deductions are timeless.
  • while mathematics is empirically testable at such rudimentary levels, it stops being so in its purest forms, like analysis and number theory. Proofs in these areas are conducted entirely conceptually
  • Logically fallacious arguments can be rather sophisticated and persuasive. But they are nevertheless invalid and always will be. Exposing such errors is part of philosophy’s stock-in-trade.
  • in ethics, science cannot necessarily tell us what to value
  • Ultimately as a result of Wittgenstein’s philosophy, we know that natural language is a public phenomenon that cannot logically be invented in isolation.
  • These are essentially conceptual clarifications. And as such, they are relatively timeless philosophical truths.
  • This is also why jurisprudence qualifies as an objective body of knowledge
  • Supreme Court justices are not so much scientific as philosophical experts on the nature of justice. And that is not to say their expertise does not count as genuine knowledge. In the best cases, it rises to the loftier level of wisdom
  • Though philosophy does sometimes employ thought experiments, these aren’t actually scientific, for they are conducted entirely in the imagination.
  • Wittgenstein showed that an ordinary word such as “game” is used consistently in myriad contrasting ways without possessing any essential unifying definition. Though this may seem impossible, the meaning of such terms is actually determined by their contextual usage
  • evidence of how most people happen to be does not necessarily tell us everything about how we should aspire to be. For how we should aspire to be is a conceptual question, namely, of how we ought to act, as opposed to an empirical question of how we do act.
Javier E

What Machines Can't Do - NYTimes.com - 0 views

  • certain mental skills will become less valuable because computers will take over. Having a great memory will probably be less valuable. Being able to be a straight-A student will be less valuable — gathering masses of information and regurgitating it back on tests. So will being able to do any mental activity that involves following a set of rules.
  • what human skills will be more valuable?
  • In the news business, some of those skills are already evident.
  • ...13 more annotations...
  • Technology has rewarded sprinters (people who can recognize and alertly post a message on Twitter about some interesting immediate event) and marathoners (people who can write large conceptual stories), but it has hurt middle-distance runners (people who write 800-word summaries of yesterday’s news conference).
  • Technology has rewarded graphic artists who can visualize data, but it has punished those who can’t turn written reporting into video presentations.
  • More generally, the age of brilliant machines seems to reward a few traits.
  • First, it rewards enthusiasm. The amount of information in front of us is practically infinite; so is the amount of data that can be collected with new tools. The people who seem to do best possess a voracious explanatory drive, an almost obsessive need to follow their curiosity.
  • Second, the era seems to reward people with extended time horizons and strategic discipline.
  • a human can provide an overall sense of direction and a conceptual frame. In a world of online distractions, the person who can maintain a long obedience toward a single goal, and who can filter out what is irrelevant to that goal, will obviously have enormous worth.
  • Third, the age seems to reward procedural architects. The giant Internet celebrities didn’t so much come up with ideas, they came up with systems in which other people could express ideas: Facebook, Twitter, Wikipedia, etc.
  • One of the oddities of collaboration is that tightly knit teams are not the most creative. Loosely bonded teams are, teams without a few domineering presences, teams that allow people to think alone before they share results with the group. So a manager who can organize a decentralized network around a clear question, without letting it dissipate or clump, will have enormous value.
  • Fifth, essentialists will probably be rewarded.
  • creativity can be described as the ability to grasp the essence of one thing, and then the essence of some very different thing, and smash them together to create some entirely new thing.
  • In the 1950s, the bureaucracy was the computer. People were organized into technocratic systems in order to perform routinized information processing.
  • now the computer is the computer. The role of the human is not to be dispassionate, depersonalized or neutral. It is precisely the emotive traits that are rewarded: the voracious lust for understanding, the enthusiasm for work, the ability to grasp the gist, the empathetic sensitivity to what will attract attention and linger in the mind.
  • Unable to compete when it comes to calculation, the best workers will come with heart in hand.
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today. “Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • ...25 more annotations...
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism,”
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
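Solresol’s conceit, as described above, is that the same word can be sung, played, or woven as color, because every word is built from the same seven syllables. A minimal sketch of that triple encoding, mapping the syllables to the seven rainbow colors; the example word below is hypothetical, not attested Solresol vocabulary:

```python
# Solresol words are sequences of the seven solfège syllables; each
# syllable can equally be rendered as a musical note or a color, so a
# sentence really can be "woven into a textile as a stream of colors."
SYLLABLES = ["do", "re", "mi", "fa", "so", "la", "si"]
COLORS = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
TO_COLOR = dict(zip(SYLLABLES, COLORS))

def to_colors(word: str) -> list[str]:
    """Split a Solresol word into its two-letter syllables and map each to its color."""
    syllables = [word[i:i + 2].lower() for i in range(0, len(word), 2)]
    if not all(s in TO_COLOR for s in syllables):
        raise ValueError(f"not a valid Solresol word: {word!r}")
    return [TO_COLOR[s] for s in syllables]

print(to_colors("doremi"))  # ['red', 'orange', 'yellow']
```

The same table in reverse would decode a color stream back into syllables, which is all the “weaving” amounts to.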
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs
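Wilkins’s scheme, as the excerpt describes it, makes a word the record of a path through a taxonomy: each forking limb contributes a letter, so spelling a word and classifying its referent are the same act. A toy sketch under invented categories and spellings (these are illustrative, not Wilkins’s actual tables):

```python
# Each node maps a letter to (concept name, sub-branches). A word for a
# concept is the concatenation of the letters along the path from the
# root of the taxonomy down to that concept's leaf.
TAXONOMY = {
    "z": ("animal", {
        "i": ("beast", {
            "t": ("dog", {}),
            "d": ("wolf", {}),
        }),
        "a": ("bird", {
            "p": ("eagle", {}),
        }),
    }),
}

def word_for(concept, tree=TAXONOMY, prefix=""):
    """Trace the forking limbs: return the letters accumulated on the path to `concept`."""
    for letter, (name, children) in tree.items():
        path = prefix + letter
        if name == concept:
            return path
        found = word_for(concept, children, path)
        if found:
            return found
    return None

print(word_for("wolf"))   # "zid" — animal (z) → beast (i) → wolf (d)
print(word_for("eagle"))  # "zap" — animal (z) → bird (a) → eagle (p)
```

The appeal, and the brittleness, of the idea are both visible here: related concepts share prefixes automatically, but every word is hostage to the correctness of the classification.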
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Looking Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada said.
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say. For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil.
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances. “There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar. “For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
Javier E

Kung Fu for Philosophers - NYTimes.com - 0 views

  • any ability resulting from practice and cultivation could accurately be said to embody kung fu.
  • the predominant orientation of traditional Chinese philosophy is a concern with how to live one’s life, rather than with discovering the truth about reality.
  • Confucius’s call for “rectification of names” — one must use words appropriately — is more a kung fu method for securing sociopolitical order than for capturing the essence of things, as “names,” or words, are placeholders for expectations of how the bearer of the names should behave and be treated. This points to a realization of what J. L. Austin calls the “performative” function of language.
  • ...12 more annotations...
  • Instead of leading to a search for certainty, as Descartes’s dream did, Zhuangzi came to the realization that he had perceived “the transformation of things,” indicating that one should go along with this transformation rather than trying in vain to search for what is real.
  • the views of Mencius and of his later opponent Xunzi about human nature are more recommendations of how one should view oneself in order to become a better person than metaphysical assertions about whether humans are by nature good or bad. Though the two men’s assertions about human nature are incompatible, they may still function inside the Confucian tradition as alternative ways of cultivation.
  • The Buddhist doctrine of no-self surely looks metaphysical, but its real aim is to free one from suffering, since according to Buddhism suffering comes ultimately from attachment to the self. Buddhist meditations are kung fu practices to shake off one’s attachment, and not just intellectual inquiries for getting propositional truth.
  • The essence of kung fu — various arts and instructions about how to cultivate the person and conduct one’s life — is often hard to digest for those who are used to the flavor and texture of mainstream Western philosophy. It is understandable that, even after sincere willingness to try, one is often still turned away by the lack of clear definitions of key terms and the absence of linear arguments in classic Chinese texts. This, however, is not a weakness, but rather a requirement of the kung fu orientation — not unlike the way that learning how to swim requires one to focus on practice and not on conceptual understanding.
  • It even expands epistemology into the non-conceptual realm in which the accessibility of knowledge is dependent on the cultivation of cognitive abilities, and not simply on whatever is “publicly observable” to everyone. It also shows that cultivation of the person is not confined to “knowing how.” An exemplary person may well have great charisma that affects others without necessarily knowing how to produce that effect.
  • Western philosophy at its origin is similar to classic Chinese philosophy. The significance of this point is not merely in revealing historical facts. It calls our attention to a dimension that has been eclipsed by the obsession with the search for eternal, universal truth and the way it is practiced, namely through rational arguments.
  • One might well consider the Chinese kung fu perspective a form of pragmatism.  The proximity between the two is probably why the latter was well received in China early last century when John Dewey toured the country. What the kung fu perspective adds to the pragmatic approach, however, is its clear emphasis on the cultivation and transformation of the person, a dimension that is already in Dewey and William James but that often gets neglected
  • A kung fu master does not simply make good choices and use effective instruments to satisfy whatever preferences a person happens to have. In fact the subject is never simply accepted as a given. While an efficacious action may be the result of a sound rational decision, a good action that demonstrates kung fu has to be rooted in the entire person, including one’s bodily dispositions and sentiments, and its goodness is displayed not only through its consequences but also in the artistic style with which one does it. It also brings forward what Charles Taylor calls the “background” — elements such as tradition and community — in our understanding of the formation of a person’s beliefs and attitudes. Through the kung fu approach, classic Chinese philosophy displays a holistic vision that brings together these marginalized dimensions and thereby forces one to pay close attention to the ways they affect each other.
  • This kung fu approach shares a lot of insights with the Aristotelian virtue ethics, which focuses on the cultivation of the agent instead of on the formulation of rules of conduct. Yet unlike Aristotelian ethics, the kung fu approach to ethics does not rely on any metaphysics for justification.
  • This approach opens up the possibility of allowing multiple competing visions of excellence, including the metaphysics or religious beliefs by which they are understood and guided, and justification of these beliefs is then left to concrete human experience.
  • it is more appropriate to consider kung fu as a form of art. Art is not ultimately measured by its dominance of the market. In addition, the function of art is not accurate reflection of the real world; its expression is not constrained to the form of universal principles and logical reasoning, and it requires cultivation of the artist, embodiment of virtues/virtuosities, and imagination and creativity.
  • If philosophy is “a way of life,” as Pierre Hadot puts it, the kung fu approach suggests that we take philosophy as the pursuit of the art of living well, and not just as a narrowly defined rational way of life.
kushnerha

Philosophy's True Home - The New York Times - 0 views

  • We’ve all heard the argument that philosophy is isolated, an “ivory tower” discipline cut off from virtually every other progress-making pursuit of knowledge, including math and the sciences, as well as from the actual concerns of daily life. The reasons given for this are many. In a widely read essay in this series, “When Philosophy Lost Its Way,” Robert Frodeman and Adam Briggle claim that it was philosophy’s institutionalization in the university in the late 19th century that separated it from the study of humanity and nature, now the province of social and natural sciences.
  • This institutionalization, the authors claim, led it to betray its central aim of articulating the knowledge needed to live virtuous and rewarding lives. I have a different view: Philosophy isn’t separated from the social, natural or mathematical sciences, nor is it neglecting the study of goodness, justice and virtue, which was never its central aim.
  • identified philosophy with informal linguistic analysis. Fortunately, this narrow view didn’t stop them from contributing to the science of language and the study of law. Now long gone, neither movement defined the philosophy of its day and neither arose from locating it in universities.
  • ...13 more annotations...
  • The authors claim that philosophy abandoned its relationship to other disciplines by creating its own purified domain, accessible only to credentialed professionals. It is true that from roughly 1930 to 1950, some philosophers — logical empiricists, in particular — did speak of philosophy having its own exclusive subject matter. But since that subject matter was logical analysis aimed at unifying all of science, interdisciplinarity was front and center.
  • Philosophy also played a role in 20th-century physics, influencing the great physicists Albert Einstein, Niels Bohr and Werner Heisenberg. The philosophers Moritz Schlick and Hans Reichenbach reciprocated that interest by assimilating the new physics into their philosophies.
  • developed ideas relating logic to linguistic meaning that provided a framework for studying meaning in all human languages. Others, including Paul Grice and J.L. Austin, explained how linguistic meaning mixes with contextual information to enrich communicative contents and how certain linguistic performances change social facts. Today a new philosophical conception of the relationship between meaning and cognition adds a further dimension to linguistic science.
  • Decision theory — the science of rational norms governing action, belief and decision under uncertainty — was developed by the 20th-century philosophers Frank Ramsey, Rudolf Carnap, Richard Jeffrey and others. It plays a foundational role in political science and economics by telling us what rationality requires, given our evidence, priorities and the strength of our beliefs. Today, no area of philosophy is more successful in attracting top young minds.
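The norm the excerpt describes — what rationality requires “given our evidence, priorities and the strength of our beliefs” — is usually cashed out as expected-utility maximization. A minimal sketch, with invented acts, states, and numbers purely for illustration:

```python
# Decision theory's core recipe: weight the utility of each possible
# outcome of an act by your degree of belief (subjective probability)
# in the state that produces it, then choose the act with the highest sum.
def expected_utility(utilities, beliefs):
    """Sum utility-of-outcome weighted by the probability of each state."""
    return sum(beliefs[state] * u for state, u in utilities.items())

def rational_choice(acts, beliefs):
    """Pick the act that maximizes expected utility given one's credences."""
    return max(acts, key=lambda a: expected_utility(acts[a], beliefs))

beliefs = {"rain": 0.3, "shine": 0.7}  # degrees of belief, summing to 1
acts = {
    "take umbrella": {"rain": 1.0, "shine": 0.5},   # mild nuisance if dry
    "leave it":      {"rain": -1.0, "shine": 1.0},  # soaked if it rains
}
print(rational_choice(acts, beliefs))  # "take umbrella" (EU 0.65 vs. 0.4)
```

Ramsey’s and Jeffrey’s contributions were precisely to justify this recipe: to show how coherent degrees of belief and preference, measured from an agent’s choices, must fit this weighted-sum form.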
  • Philosophy also assisted psychology in its long march away from narrow behaviorism and speculative Freudianism. The mid-20th-century functionalist perspective pioneered by Hilary Putnam was particularly important. According to it, pain, pleasure and belief are neither behavioral dispositions nor bare neurological states. They are interacting internal causes, capable of very different physical realizations, that serve the goals of individuals in specific ways. This view is now embedded in cognitive psychology and neuroscience.
  • philosopher-mathematicians Gottlob Frege, Bertrand Russell, Kurt Gödel, Alonzo Church and Alan Turing invented symbolic logic, helped establish the set-theoretic foundations of mathematics, and gave us the formal theory of computation that ushered in the digital age
  • Philosophy of biology is following a similar path. Today’s philosophy of science is less accessible than Aristotle’s natural philosophy chiefly because it systematizes a larger, more technically sophisticated body of knowledge.
  • Philosophy’s interaction with mathematics, linguistics, economics, political science, psychology and physics requires specialization. Far from fostering isolation, this specialization makes communication and cooperation among disciplines possible. This has always been so.
  • Nor did scientific progress rob philosophy of its former scientific subject matter, leaving it to concentrate on the broadly moral. In fact, philosophy thrives when enough is known to make progress conceivable, but it remains unachieved because of methodological confusion. Philosophy helps break the impasse by articulating new questions, posing possible solutions and forging new conceptual tools.
  • Our knowledge of the universe and ourselves expands like a ripple surrounding a pebble dropped in a pool. As we move away from the center of the spreading circle, its area, representing our secure knowledge, grows. But so does its circumference, representing the border where knowledge blurs into uncertainty and speculation, and methodological confusion returns. Philosophy patrols the border, trying to understand how we got there and to conceptualize our next move. Its job is unending.
  • Although progress in ethics, political philosophy and the illumination of life’s meaning has been less impressive than advances in some other areas, it is accelerating.
  • The advances in our understanding because of careful formulation and critical evaluation of theories of goodness, rightness, justice and human flourishing by philosophers since 1970 compare well to the advances made by philosophers from Aristotle to 1970.
  • The knowledge required to maintain philosophy’s continuing task, including its vital connection to other disciplines, is too vast to be held in one mind. Despite the often-repeated idea that philosophy’s true calling can only be fulfilled in the public square, philosophers actually function best in universities, where they acquire and share knowledge with their colleagues in other disciplines. It is also vital for philosophers to engage students — both those who major in the subject, and those who do not. Although philosophy has never had a mass audience, it remains remarkably accessible to the average student; unlike the natural sciences, its frontiers can be reached in a few undergraduate courses.
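The decision-theoretic norms mentioned in the first bullet above — choosing the action that rationality requires, given one's evidence, priorities and strength of belief — are standardly modeled as expected-utility maximization. Here is a minimal sketch; the action names, probabilities and utilities are all hypothetical illustrations, not drawn from the article:

```python
# Expected-utility sketch of decision theory: a rational agent picks the
# action whose probability-weighted utility, given its beliefs, is highest.

def expected_utility(action, outcomes):
    """outcomes: list of (probability, utility) pairs for this action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(a, actions[a]))

# Hypothetical example: carry an umbrella, believing rain is 30% likely?
actions = {
    "umbrella":    [(0.3, 5), (0.7, -1)],    # stay dry in rain; mild hassle otherwise
    "no umbrella": [(0.3, -10), (0.7, 2)],   # soaked in rain; unencumbered otherwise
}
print(rational_choice(actions))  # -> umbrella
```

On these numbers, carrying the umbrella has expected utility 0.8 versus -1.6 for going without, so the norm recommends it; changing the agent's credences or priorities changes the verdict, which is exactly the sense in which the theory ties rational action to evidence and values.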
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on t
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is "the key to science's success," because it "channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence."
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
huffem4

Dual Process Theory - Explanation and examples - Conceptually - 1 views

  • When we’re making decisions, we use two different systems of thinking. System 1 is our intuition or gut-feeling: fast, automatic, emotional, and subconscious. System 2 is slower and more deliberate: consciously working through different considerations, applying different concepts and models and weighing them all up.
  • One takeaway from the psychological research on dual process theory is that our System 1 (intuition) is more accurate in areas where we’ve gathered a lot of data with reliable and fast feedback, like social dynamics.
  • our System 2 tends to be better for decisions where we don’t have a lot of experience; involving numbers, statistics, logic, abstractions, or models; and phenomena our ancestors never dealt with.
  • ...1 more annotation...
  • You can also use both systems, acknowledging that you have an intuition, and feeding it into your System 2 model.
Javier E

Quitters Never Win: The Costs of Leaving Social Media - Woodrow Hartzog and Evan Seling... - 2 views

  • Manjoo offers this security-centric path for folks who are anxious about the service being "one of the most intrusive technologies ever built," and believe that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Bottom line: stop tuning in and start dropping out if you suspect that the culture of oversharing, digital narcissism, and, above all, big-data-hungry, corporate profiteering will trump privacy settings.
  • Angwin plans on keeping a bare-bones profile. She'll maintain just enough presence to send private messages, review tagged photos, and be easy for readers to find. Others might try similar experiments, perhaps keeping friends, but reducing their communication to banal and innocuous expressions. But, would such disclosures be compelling or sincere enough to retain the technology's utility?
  • The other unattractive option is for social web users to willingly pay for connectivity with extreme publicity.
  • ...9 more annotations...
  • go this route if you believe privacy is dead, but find social networking too good to miss out on.
  • While we should be attuned to constraints and their consequences, there are at least four problems with conceptualizing the social media user's dilemma as a version of "if you can't stand the heat, get out of the kitchen".
  • The efficacy of abandoning social media can be questioned when others are free to share information about you on a platform long after you've left.
  • Second, while abandoning a single social technology might seem easy, this "love it or leave it" strategy -- which demands extreme caution and foresight from users and punishes them for their naivete -- isn't sustainable without great cost in the aggregate. If we look past the consequences of opting out of a specific service (like Facebook), we find a disconcerting and more far-reaching possibility: behavior that justifies a never-ending strategy of abandoning every social technology that threatens privacy -- a can being kicked down the road in perpetuity without us resolving the hard question of whether a satisfying balance between protection and publicity can be found online
  • if your current social network has no obligation to respect the obscurity of your information, what justifies believing other companies will continue to be trustworthy over time?
  • Sticking with the opt-out procedure turns digital life into a paranoid game of whack-a-mole where the goal is to stay ahead of the crushing mallet. Unfortunately, this path of perilously transferring risk from one medium to another is the direction we're headed if social media users can't make reasonable decisions based on the current context of obscurity, but instead are asked to assume all online social interaction can or will eventually lose its obscurity protection.
  • The fourth problem with the "leave if you're unhappy" ethos is that it is overly individualistic. If a critical mass participates in the "Opt-Out Revolution," what would happen to the struggling, the lonely, the curious, the caring, and the collaborative if the social web went dark?
  • Our point is that there is a middle ground between reclusion and widespread publicity, and the reduction of user options to quitting or coping, which are both problematic, need not be inevitable, especially when we can continue exploring ways to alleviate the user burden of retreat and the societal cost of a dark social web.
  • It is easy to presume that "even if you unfriend everybody on Facebook, and you never join Twitter, and you don't have a LinkedIn profile or an About.me page or much else in the way of online presence, you're still going to end up being mapped and charted and slotted in to your rightful place in the global social network that is life." But so long as it remains possible to create obscurity through privacy enhancing technology, effective regulation, contextually appropriate privacy settings, circumspect behavior, and a clear understanding of how our data can be accessed and processed, that fatalism isn't justified.
Javier E

Emmy Noether, the Most Significant Mathematician You've Never Heard Of - NYTimes.com - 0 views

  • Albert Einstein called her the most “significant” and “creative” female mathematician of all time, and others of her contemporaries were inclined to drop the modification by sex. She invented a theorem that united with magisterial concision two conceptual pillars of physics: symmetry in nature and the universal laws of conservation. Some consider Noether’s theorem, as it is now called, as important as Einstein’s theory of relativity; it undergirds much of today’s vanguard research in physics
  • At Göttingen, she pursued her passion for mathematical invariance, the study of numbers that can be manipulated in various ways and still remain constant. In the relationship between a star and its planet, for example, the shape and radius of the planetary orbit may change, but the gravitational attraction conjoining one to the other remains the same — and there’s your invariance.
  • Noether’s theorem, an expression of the deep tie between the underlying geometry of the universe and the behavior of the mass and energy that call the universe home. What the revolutionary theorem says, in cartoon essence, is the following: Wherever you find some sort of symmetry in nature, some predictability or homogeneity of parts, you’ll find lurking in the background a corresponding conservation — of momentum, electric charge, energy or the like. If a bicycle wheel is radially symmetric, if you can spin it on its axis and it still looks the same in all directions, well, then, that symmetric translation must yield a corresponding conservation.
  • ...1 more annotation...
  • Noether’s theorem shows that a symmetry of time — like the fact that whether you throw a ball in the air tomorrow or make the same toss next week will have no effect on the ball’s trajectory — is directly related to the conservation of energy, our old homily that energy can be neither created nor destroyed but merely changes form.
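In the Lagrangian framework these excerpts gesture at, the time-symmetry case of Noether's theorem can be written compactly. This is a standard textbook formulation added for illustration, not a quotation from the article:

```latex
% If the Lagrangian has no explicit time dependence (time-translation symmetry),
%   \frac{\partial L}{\partial t} = 0,
% then the quantity
%   E \;=\; \sum_i \dot{q}_i \,\frac{\partial L}{\partial \dot{q}_i} \;-\; L
% is conserved along solutions of the Euler–Lagrange equations:
%   \frac{dE}{dt} = 0 .
```

Here $E$ is the energy, so invariance under shifting the clock (tossing the ball tomorrow versus next week) yields energy conservation; the same construction applied to spatial-translation symmetry yields conservation of momentum.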
Javier E

The Trouble With Neutrinos That Outpaced Einstein's Theory - NYTimes.com - 0 views

  • The British astrophysicist Arthur S. Eddington once wrote, “No experiment should be believed until it has been confirmed by theory.”
  • Adding to the sense of finality was the simple fact — as Eddington might have pointed out — that faster-than-light neutrinos had never been confirmed by theory. Or as John G. Learned, a neutrino physicist at the University of Hawaii, put it in an e-mail, “An interesting result of all this fracas is that no new model I have seen (or heard of from my friends) really is credible to explain the faster-than-light neutrinos.”
  • Eddington’s dictum is not as radical as it might sound. He made it after early measurements of the rate of expansion of the universe made it appear that our planet was older than the cosmos in which it resides — an untenable notion. “It means that science is not just a book of facts, it is understanding as well,”
  • ...1 more annotation...
  • If a “fact” cannot be understood, fitted into a conceptual framework that we have reason to believe in, or confirmed independently some other way, it risks becoming what journalists like to call a “permanent exclusive” — wrong.
Javier E

Summarizing EdTech in One Slide: Market, Open and Dewey - EdTech Researcher - Education... - 0 views

  • My job is to introduce participants to the diverse landscape of the field of education technology. One of the biggest problems in the ed-tech space right now is that the phrase "education technology" means very different things to different people and organizations. Here's a 2x2 model that summarizes (and, of course, oversimplifies) the entire education technology space:
  • There are two important questions to ask any ed tech organization or advocate: 1) Are you trying to make a billion dollars? and 2) Do you believe that learning occurs primarily through "delivery?" By answering those two questions, we can put everyone in the ed-tech field into one of three groups: Market, Open and Dewey.
  • The "Market" people are those that are trying to make a billion dollars and believe that learning is fundamentally a process of delivery. These people typically believe that free markets are the ultimate tool for optimizing all outcomes in society, and education should be no exception
  • ...8 more annotations...
  • The difference between Open and Market is that Open folks believe that learning objects are not commodities to be bought and sold, but the public infrastructure of our culture
  • the biggest players in the Open movement generally believe that learning is a process of algorthmically delivering learning objects to consumers, and they frequently use "supply and demand" models to conceptualize their efforts
  • They view learning as the process of delivering learning objects for the individual consumption of students, and they have great faith that this delivery process can be optimized by algorithms and data mining. It is incredibly important for them that we have quantifiable outcomes of learning (standardized tests), since they can only optimize on quantitative metrics.
  • They'd like learning objects and the algorithms distributing those objects to be openly licensed and free for teachers to reuse, remix, and re-publish.
  • The "Dewey" people reject the notion of learning as "delivery" and the free market as the best platform for learning.
  • Dewey is a complex figure, but when most people invoke him, they mean that learning occurs through people's experiences and not through content delivery
  • Learning occurs when teachers and students work together to create or make something with meaning to people in the real world.
  • They tend to believe that the nuanced, contextual, social experiences that lead to the best learning experiences are easiest to facilitate when the curriculum is not overly prescriptive.
Javier E

The Age of 'Infopolitics' - NYTimes.com - 0 views

  • we need a new way of thinking about our informational milieu. What we need is a concept of infopolitics that would help us understand the increasingly dense ties between politics and information
  • Infopolitics encompasses not only traditional state surveillance and data surveillance, but also “data analytics” (the techniques that enable marketers at companies like Target to detect, for instance, if you are pregnant), digital rights movements (promoted by organizations like the Electronic Frontier Foundation), online-only crypto-currencies (like Bitcoin or Litecoin), algorithmic finance (like automated micro-trading) and digital property disputes (from peer-to-peer file sharing to property claims in the virtual world of Second Life)
  • Surveying this iceberg is crucial because atop it sits a new kind of person: the informational person. Politically and culturally, we are increasingly defined through an array of information architectures: highly designed environments of data, like our social media profiles, into which we often have to squeeze ourselves
  • ...12 more annotations...
  • We have become what the privacy theorist Daniel Solove calls “digital persons.” As such we are subject to infopolitics (or what the philosopher Grégoire Chamayou calls “datapower,” the political theorist Davide Panagia “datapolitik” and the pioneering thinker Donna Haraway “informatics of domination”).
  • Once fingerprints, biometrics, birth certificates and standardized names were operational, it became possible to implement an international passport system, a social security number and all other manner of paperwork that tells us who someone is. When all that paper ultimately went digital, the reams of data about us became radically more accessible and subject to manipulation,
  • We like to think of ourselves as somehow apart from all this information. We are real — the information is merely about us.
  • But what is it that is real? What would be left of you if someone took away all your numbers, cards, accounts, dossiers and other informational prostheses? Information is not just about you — it also constitutes who you are.
  • We understandably do not want to see ourselves as bits and bytes. But unless we begin conceptualizing ourselves in this way, we leave it to others to do it for us
  • agencies and corporations will continue producing new visions of you and me, and they will do so without our input if we remain stubbornly attached to antiquated conceptions of selfhood that keep us from admitting how informational we already are.
  • What should we do about our Internet and phone patterns’ being fastidiously harvested and stored away in remote databanks where they await inspection by future algorithms developed at the National Security Agency, Facebook, credit reporting firms like Experian and other new institutions of information and control that will come into existence in future decades?
  • What bits of the informational you will fall under scrutiny? The political you? The sexual you? What next-generation McCarthyisms await your informational self? And will those excesses of oversight be found in some Senate subcommittee against which we democratic citizens might hope to rise up in revolt — or will they lurk among algorithmic automatons that silently seal our fates in digital filing systems?
  • Despite their decidedly different political sensibilities, what links together the likes of Senator Wyden and the international hacker network known as Anonymous is that they respect the severity of what is at stake in our information.
  • information is a site for the call of justice today, alongside more quintessential battlefields like liberty of thought and equality of opportunity.
  • we lack the intellectual framework to grasp the new kinds of political injustices characteristic of today’s information society.
  • though nearly all of us have a vague sense that something is wrong with the new regimes of data surveillance, it is difficult for us to specify exactly what is happening and why it raises serious concern
Javier E

Happiness Is a Warm iPhone - NYTimes.com - 0 views

  • We fall in love with our technology. That’s how we talk about our gadgets — with the language of emotional attachment, with irrational expectations about happily ever after.
  • I loved what was possible with it. Even though I wasn’t able to actually make it do anything, I knew that someone could. And that was enough, the mere idea of a machine, one that anyone could have in their home, that would take strings of symbols and turn them into music, into movement, into something else out there in the world.
  • We’re certainly into the magic zone — and yet, the magic is somehow fading for me. Technology has crossed the uncanny valley; it is simply too good at representing our real world.
  • ...3 more annotations...
  • since buying that first iPhone, I’ve grown too used to new worlds.
  • As everything gets faster and richer and denser with information, as a whole new dimension to our physical world evolves online, some possibilities open up, and others close down. The potential congeals into the actual, the possible calcifies into the practical. What is imaginable gets pared down into what was actually imagined
  • But things are by necessity amazing in a very specific way, and with a very specific visual grammar and conceptual environment — and that environment is one that is closed, controlled, packaged for us. We’re holding magic boxes, boxes that want to serve us and coddle us, instead of challenge us. And how can you love something that doesn’t challenge you?
Javier E

Interview: Ted Chiang | The Asian American Literary Review - 0 views

  • I think most people’s ideas of science fiction are formed by Hollywood movies, so they think most science fiction is a special effects-driven story revolving around a battle between good and evil
  • I don’t think of that as a science fiction story. You can tell a good-versus-evil story in any time period and in any setting. Setting it in the future and adding robots to it doesn’t make it a science fiction story.
  • I think science fiction is fundamentally a post-industrial revolution form of storytelling. Some literary critics have noted that the good-versus-evil story follows a pattern where the world starts out as a good place, evil intrudes, the heroes fight and eventually defeat evil, and the world goes back to being a good place. Those critics have said that this is fundamentally a conservative storyline because it’s about maintaining the status quo. This is a common story pattern in crime fiction, too—there’s some disruption to the order, but eventually order is restored. Science fiction offers a different kind of story, a story where the world starts out as recognizable and familiar but is disrupted or changed by some new discovery or technology. At the end of the story, the world is changed permanently. The original condition is never restored. And so in this sense, this story pattern is progressive because its underlying message is not that you should maintain the status quo, but that change is inevitable. The consequences of this new discovery or technology—whether they’re positive or negative—are here to stay and we’ll have to deal with them.
  • ...3 more annotations...
  • There’s also a subset of this progressive story pattern that I’m particularly interested in, and that’s the “conceptual breakthrough” story, where the characters discover something about the nature of the universe which radically expands their understanding of the world.  This is a classic science fiction storyline.
  • one of the cool things about science fiction is that it lets you dramatize the process of scientific discovery, that moment of suddenly understanding something about the universe. That is what scientists find appealing about science, and I enjoy seeing the same thing in science fiction.
  • when you mention myth or mythic structure, yes, I don’t think myths can do that, because in general, myths reflect a pre-industrial view of the world. I don’t know if there is room in mythology for a strong conception of the future, other than an end-of-the-world or Armageddon scenario …
Javier E

Belief Is the Least Part of Faith - NYTimes.com - 1 views

  • Why do people believe in God? What is our evidence that there is an invisible agent who has a real impact on our lives? How can those people be so confident?
  • These are the questions that university-educated liberals ask about faith. They are deep questions. But they are also abstract and intellectual. They are philosophical questions. In an evangelical church, the questions would probably have circled around how to feel God’s love and how to be more aware of God’s presence. Those are fundamentally practical questions.
  • The role of belief in religion is greatly overstated, as anthropologists have long known. In 1912, Émile Durkheim, one of the founders of modern social science, argued that religion arose as a way for social groups to experience themselves as groups. He thought that when people experienced themselves in social groups they felt bigger than themselves, better, more alive — and that they identified that aliveness as something supernatural. Religious ideas arose to make sense of this experience of being part of something greater. Durkheim thought that belief was more like a flag than a philosophical position: You don’t go to church because you believe in God; rather, you believe in God because you go to church.
  • ...4 more annotations...
  • In fact, you can argue that religious belief as we now conceptualize it is an entirely modern phenomenon. As the comparative religion scholar Wilfred Cantwell Smith pointed out, when the King James Bible was printed in 1611, “to believe” meant something like “to hold dear.” Smith, who died in 2000, once wrote: “The affirmation ‘I believe in God’ used to mean: ‘Given the reality of God as a fact of the universe, I hereby pledge to Him my heart and soul. I committedly opt to live in loyalty to Him. I offer my life to be judged by Him, trusting His mercy.’ Today the statement may be taken by some as meaning: ‘Given the uncertainty as to whether there be a God or not, as a fact of modern life, I announce that my opinion is yes.’ ”
  • secular Americans often think that the most important thing to understand about religion is why people believe in God, because we think that belief precedes action and explains choice. That’s part of our folk model of the mind: that belief comes first.
  • that was not really what I saw after my years spending time in evangelical churches. I saw that people went to church to experience joy and to learn how to have more of it. These days I find that it is more helpful to think about faith as the questions people choose to focus on, rather than the propositions observers think they must hold.
  • If you can sidestep the problem of belief — and the related politics, which can be so distracting — it is easier to see that the evangelical view of the world is full of joy. God is good. The world is good. Things will be good, even if they don’t seem good now. That’s what draws people to church. It is understandably hard for secular observers to sidestep the problem of belief. But it is worth appreciating that in belief is the reach for joy, and the reason many people go to church in the first place.
summertyler

Is It Ordinary Memory Loss, or Alzheimer's Disease? - NYTimes.com - 0 views

  • worried about her memory, wondering if she could have the beginnings of dementia
  • no more difficulty than the rest of us her age in remembering events, names and places, her physician suggested that, given her level of concern, she should have things checked out
  • two days of tests of her cognitive abilities
  • ...7 more annotations...
  • The result: reassurance and relief. Everything was in the normal range for her age, and she registered as superior on the ability to perform tasks and solve problems.
  • Simple tests done in eight to 12 minutes in a doctor’s office can determine whether memory issues are normal for one’s age or are problematic and warrant a more thorough evaluation.
  • more than half of older adults with signs of memory loss never see a doctor about it
  • “Early evaluation and identification of people with dementia may help them receive care earlier,”
  • “It can help families make plans for care, help with day-to-day tasks, including medication administration, and watch for future problems that can occur.”
  • Both tests measure orientation to time, date and place; attention and concentration; ability to calculate; memory; language; and conceptual thinking.
  • its score can be skewed by a person’s level of education, cultural background, a learning or speech disorder, and language fluency
  • Memory loss is difficult to understand because of the many factors that affect it.
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ... - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • ...43 more annotations...
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • captain's response to the stall warning, the investigators reported, "should have been automatic, but his improper flight control inputs were inconsistent with his training" and instead revealed "startle and confusion."
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters' ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit's extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
  • Automation increases the efficiency and speed of tasks, but it decreases the individual's knowledge of a task and a human's ability to learn.
Javier E

Stoned - NYTimes.com - 0 views

  • Philosophy, among other things, is that living activity of critical reflection in a specific context, by which human beings strive to analyze the world in which they find themselves, and to question what passes for common sense or public opinion — what Socrates called doxa — in the particular society in which they live.
  • Philosophy, as the great American philosopher Stanley Cavell puts it, is the education of grownups.
  • As it functions in society, philosophy can also provide a method for debunking the many myths and ideologies that we live by and propose alternative conceptual or normative frameworks for thinking about concepts — justice, truth, freedom, the mind, science, religion — all of which have been debated over the past months in The Stone. Hegel says that philosophy can allow us to comprehend our time in thought. But it can also — perhaps more importantly — allow us to resist our time, to ask untimely questions, difficult, intractable and unfashionable questions. Nietzsche writes in a very late text, where he is still trying to wrestle himself free from the spell of his fascination with the composer Richard Wagner: "What does a philosopher demand of himself first and last? To overcome his time in himself, to become 'timeless.' With what must he therefore engage in the hardest combat? With whatever marks him as a child of his time. Well, then I am, no less than Wagner, a child of this time; that is, a decadent. But I comprehended this, I resisted it. The philosopher in me resisted."
Javier E

Book Review: The Moral Lives of Animals - WSJ.com - 0 views

  • have elucidated very real differences between human and nonhuman minds in the realm of conceptual reasoning, particularly with respect to what has been termed "theory of mind." This is the uniquely human ability to have thoughts about thoughts and to perceive that other minds exist and that they can hold ideas and beliefs different from one's own. While human and animal minds share a broadly similar ability to learn from experience, formulate intentions and store memories, careful experiments have repeatedly come up empty when attempting to establish the existence of a theory of mind in nonhumans.
  • A "theory of mind" is what makes it even possible to formulate abstract notions, to imagine the future, to try out ideas before acting upon them, to reflect about our own conduct and to see things from another's viewpoint. Charles Darwin observed that such a capacity is indeed the sine qua non of moral thought: "A moral being is one who is capable of reflecting on his past actions and their motives—of approving some and disapproving of others," he wrote in "The Descent of Man."