
Contents contributed and discussions participated by kushnerha

Consciousness Isn't a Mystery. It's Matter. - The New York Times - 3 views

  • Every day, it seems, some verifiably intelligent person tells us that we don’t know what consciousness is. The nature of consciousness, they say, is an awesome mystery. It’s the ultimate hard problem. The current Wikipedia entry is typical: Consciousness “is the most mysterious aspect of our lives”; philosophers “have struggled to comprehend the nature of consciousness.”
  • I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
  • The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious except insofar as consciousness is itself a form of physical stuff.
  • “We know nothing about the intrinsic quality of physical events,” he wrote, “except when these are mental events that we directly experience.”
  • I think Russell is right: Human conscious experience is wholly a matter of physical goings-on in the body and in particular the brain. But why does he say that we know nothing about the intrinsic quality of physical events except when these are mental events we directly experience? Isn’t he exaggerating? I don’t think so.
  • I need to try to reply to those (they’re probably philosophers) who doubt that we really know what conscious experience is. The reply is simple. We know what conscious experience is because the having is the knowing: Having conscious experience is knowing what it is. You don’t have to think about it (it’s really much better not to). You just have to have it. It’s true that people can make all sorts of mistakes about what is going on when they have experience, but none of them threaten the fundamental sense in which we know exactly what experience is just in having it.
  • If someone continues to ask what it is, one good reply (although Wittgenstein disapproved of it) is “you know what it is like from your own case.” Ned Block replies by adapting the response Louis Armstrong reportedly gave to someone who asked him what jazz was: “If you gotta ask, you ain’t never going to know.”
  • So we all know what consciousness is. Once we’re clear on this we can try to go further, for consciousness does of course raise a hard problem. The problem arises from the fact that we accept that consciousness is wholly a matter of physical goings-on, but can’t see how this can be so. We examine the brain in ever greater detail, using increasingly powerful techniques like fMRI, and we observe extraordinarily complex neuroelectrochemical goings-on, but we can’t even begin to understand how these goings-on can be (or give rise to) conscious experiences.
  • 1966 movie “Fantastic Voyage,” or imagine the ultimate brain scanner. Leibniz continued, “Suppose we do: visiting its insides, we will never find anything but parts pushing each other — never anything that could explain a conscious state.”
  • His mistake is to go further, and conclude that physical goings-on can’t possibly be conscious goings-on. Many make the same mistake today — the Very Large Mistake (as Winnie-the-Pooh might put it) of thinking that we know enough about the nature of physical stuff to know that conscious experience can’t be physical. We don’t. We don’t know the intrinsic nature of physical stuff, except — Russell again — insofar as we know it simply through having a conscious experience.
  • We find this idea extremely difficult because we’re so very deeply committed to the belief that we know more about the physical than we do, and (in particular) know enough to know that consciousness can’t be physical. We don’t see that the hard problem is not what consciousness is, it’s what matter is — what the physical is.
  • This point about the limits on what physics can tell us is rock solid, and it arises before we begin to consider any of the deep problems of understanding that arise within physics — problems with “dark matter” or “dark energy,” for example — or with reconciling quantum mechanics and general relativity theory.
  • Those who make the Very Large Mistake (of thinking they know enough about the nature of the physical to know that consciousness can’t be physical) tend to split into two groups. Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become “dualists.” Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become “eliminativists.”
  • no one has to react in either of these ways. All they have to do is grasp the fundamental respect in which we don’t know the intrinsic nature of physical stuff in spite of all that physics tells us. In particular, we don’t know anything about the physical that gives us good reason to think that consciousness can’t be wholly physical. It’s worth adding that one can fully accept this even if one is unwilling to agree with Russell that in having conscious experience we thereby know something about the intrinsic nature of physical reality.
  • It’s not the physics picture of matter that’s the problem; it’s the ordinary everyday picture of matter. It’s ironic that the people who are most likely to doubt or deny the existence of consciousness (on the ground that everything is physical, and that consciousness can’t possibly be physical) are also those who are most insistent on the primacy of science, because it is precisely science that makes the key point shine most brightly: the point that there is a fundamental respect in which the ultimate intrinsic nature of the stuff of the universe is unknown to us — except insofar as it is consciousness.

Facebook's Bias Is Built-In, and Bears Watching - The New York Times - 2 views

  • Facebook is the world’s most influential source of news. That’s true according to every available measure of size — the billion-plus people who devour its News Feed every day, the cargo ships of profit it keeps raking in, and the tsunami of online traffic it sends to other news sites.
  • But Facebook has also acquired a more subtle power to shape the wider news business. Across the industry, reporters, editors and media executives now look to Facebook the same way nesting baby chicks look to their engorged mother — as the source of all knowledge and nourishment, the model for how to behave in this scary new-media world. Case in point: The New York Times, among others, recently began an initiative to broadcast live video. Why do you suppose that might be? Yup, the F word. The deal includes payments from Facebook to news outlets, including The Times.
  • Yet few Americans think of Facebook as a powerful media organization, one that can alter events in the real world. When blowhards rant about the mainstream media, they do not usually mean Facebook, the mainstreamiest of all social networks. That’s because Facebook operates under a veneer of empiricism. Many people believe that what you see on Facebook represents some kind of data-mined objective truth unmolested by the subjective attitudes of fair-and-balanced human beings.
  • None of that is true. This week, Facebook rushed to deny a report in Gizmodo that said the team in charge of its “trending” news list routinely suppressed conservative points of view. Last month, Gizmodo also reported that Facebook employees asked Mark Zuckerberg, the social network’s chief executive, if the company had a responsibility to “help prevent President Trump in 2017.” Facebook denied it would ever try to manipulate elections.
  • Even if you believe that Facebook isn’t monkeying with the trending list or actively trying to swing the vote, the reports serve as timely reminders of the ever-increasing potential dangers of Facebook’s hold on the news.
  • The question isn’t whether Facebook has outsize power to shape the world — of course it does, and of course you should worry about that power. If it wanted to, Facebook could try to sway elections, favor certain policies, or just make you feel a certain way about the world, as it once proved it could do in an experiment devised to measure how emotions spread online.
  • There is no evidence Facebook is doing anything so alarming now. The danger is nevertheless real. The biggest worry is that Facebook doesn’t seem to recognize its own power, and doesn’t think of itself as a news organization with a well-developed sense of institutional ethics and responsibility, or even a potential for bias. Neither does its audience, which might believe that Facebook is immune to bias because it is run by computers.
  • That myth should die. It’s true that beyond the Trending box, most of the stories Facebook presents to you are selected by its algorithms, but those algorithms are as infused with bias as any other human editorial decision.
  • “With Facebook, humans are never not involved. Humans are in every step of the process — in terms of what we’re clicking on, who’s shifting the algorithms behind the scenes, what kind of user testing is being done, and the initial training data provided by humans.” Everything you see on Facebook is therefore the product of these people’s expertise and considered judgment, as well as their conscious and unconscious biases, apart from possible malfeasance or potential corruption. It’s often hard to know which, because Facebook’s editorial sensibilities are secret. So are its personalities: Most of the engineers, designers and others who decide what people see on Facebook will remain forever unknown to its audience.
  • Facebook also has an unmistakable corporate ethos and point of view. The company is staffed mostly by wealthy coastal Americans who tend to support Democrats, and it is wholly controlled by a young billionaire who has expressed policy preferences that many people find objectionable.
  • You could argue that none of this is unusual. Many large media outlets are powerful, somewhat opaque, operated for profit, and controlled by wealthy people who aren’t shy about their policy agendas — Bloomberg News, The Washington Post, Fox News and The New York Times, to name a few. But there are some reasons to be even more wary of Facebook’s bias. One is institutional. Many mainstream outlets have a rigorous set of rules and norms about what’s acceptable and what’s not in the news business.
  • Those algorithms could have profound implications for society. For instance, one persistent worry about algorithmically selected news is that it might reinforce people’s previously held points of view. If News Feed shows news that we’re each likely to Like, it could trap us into echo chambers and contribute to rising political polarization. In a study last year, Facebook’s scientists asserted the echo chamber effect was muted.
  • are Facebook’s engineering decisions subject to ethical review? Nobody knows.
  • The other reason to be wary of Facebook’s bias has to do with sheer size. Ms. Caplan notes that when studying bias in traditional media, scholars try to make comparisons across different news outlets. To determine if The Times is ignoring a certain story unfairly, look at competitors like The Washington Post and The Wall Street Journal. If those outlets are covering a story and The Times isn’t, there could be something amiss about the Times’s news judgment. Such comparative studies are nearly impossible for Facebook. Facebook is personalized, in that what you see on your News Feed is different from what I see on mine, so the only entity in a position to look for systemic bias across all of Facebook is Facebook itself. Even if you could determine the spread of stories across all of Facebook’s readers, what would you compare it to?

If Philosophy Won't Diversify, Let's Call It What It Really Is - The New York Times - 0 views

  • The vast majority of philosophy departments in the United States offer courses only on philosophy derived from Europe and the English-speaking world. For example, of the 118 doctoral programs in philosophy in the United States and Canada, only 10 percent have a specialist in Chinese philosophy as part of their regular faculty. Most philosophy departments also offer no courses on Africana, Indian, Islamic, Jewish, Latin American, Native American or other non-European traditions. Indeed, of the top 50 philosophy doctoral programs in the English-speaking world, only 15 percent have any regular faculty members who teach any non-Western philosophy.
  • Given the importance of non-European traditions in both the history of world philosophy and in the contemporary world, and given the increasing numbers of students in our colleges and universities from non-European backgrounds, this is astonishing. No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain. The present situation is hard to justify morally, politically, epistemically or as good educational and research training practice.
  • While a few philosophy departments have made their curriculums more diverse, and while the American Philosophical Association has slowly broadened the representation of the world’s philosophical traditions on its programs, progress has been minimal.
  • Many philosophers and many departments simply ignore arguments for greater diversity; others respond with arguments for Eurocentrism that we and many others have refuted elsewhere. The profession as a whole remains resolutely Eurocentric.
  • Instead, we ask those who sincerely believe that it does make sense to organize our discipline entirely around European and American figures and texts to pursue this agenda with honesty and openness. We therefore suggest that any department that regularly offers courses only on Western philosophy should rename itself “Department of European and American Philosophy.”
  • We see no justification for resisting this minor rebranding (though we welcome opposing views in the comments section to this article), particularly for those who endorse, implicitly or explicitly, this Eurocentric orientation.
  • Some of our colleagues defend this orientation on the grounds that non-European philosophy belongs only in “area studies” departments, like Asian Studies, African Studies or Latin American Studies. We ask that those who hold this view be consistent, and locate their own departments in “area studies” as well, in this case, Anglo-European Philosophical Studies.
  • Others might argue against renaming on the grounds that it is unfair to single out philosophy: We do not have departments of Euro-American Mathematics or Physics. This is nothing but shabby sophistry. Non-European philosophical traditions offer distinctive solutions to problems discussed within European and American philosophy, raise or frame problems not addressed in the American and European tradition, or emphasize and discuss more deeply philosophical problems that are marginalized in Anglo-European philosophy. There are no comparable differences in how mathematics or physics are practiced in other contemporary cultures.
  • Of course, we believe that renaming departments would not be nearly as valuable as actually broadening the philosophical curriculum and retaining the name “philosophy.” Philosophy as a discipline has a serious diversity problem, with women and minorities underrepresented at all levels among students and faculty, even while the percentage of these groups increases among college students. Part of the problem is the perception that philosophy departments are nothing but temples to the achievement of males of European descent. Our recommendation is straightforward: Those who are comfortable with that perception should confirm it in good faith and defend it honestly; if they cannot do so, we urge them to diversify their faculty and their curriculum.
  • This is not to disparage the value of the works in the contemporary philosophical canon: Clearly, there is nothing intrinsically wrong with philosophy written by males of European descent; but philosophy has always become richer as it becomes increasingly diverse and pluralistic.
  • We hope that American philosophy departments will someday teach Confucius as routinely as they now teach Kant, that philosophy students will eventually have as many opportunities to study the “Bhagavad Gita” as they do the “Republic,” that the Flying Man thought experiment of the Persian philosopher Avicenna (980-1037) will be as well-known as the Brain-in-a-Vat thought experiment of the American philosopher Hilary Putnam (1926-2016), that the ancient Indian scholar Candrakirti’s critical examination of the concept of the self will be as well-studied as David Hume’s, that Frantz Fanon (1925-1961), Kwasi Wiredu (1931- ), Lame Deer (1903-1976) and Maria Lugones will be as familiar to our students as their equally profound colleagues in the contemporary philosophical canon. But, until then, let’s be honest, face reality and call departments of European-American Philosophy what they really are.
  • For demographic, political and historical reasons, the change to a more multicultural conception of philosophy in the United States seems inevitable. Heed the Stoic adage: “The Fates lead those who come willingly, and drag those who do not.”

A new atlas maps word meanings in the brain | PBS NewsHour - 0 views

  • like Google Maps for your cerebral cortex: A new interactive atlas, developed with the help of such unlikely tools as public radio podcasts and Wikipedia, purports to show which bits of your brain help you understand which types of concepts.
  • Hear a word relating to family, loss, or the passing of time — such as “wife,” “month,” or “remarried” — and a ridge called the right angular gyrus may be working overtime. Listening to your contractor talking about the design of your new front porch? Thank a pea-sized spot of brain behind your left ear.
  • The research on the “brain dictionary” has the hallmarks of a big scientific splash: Published on Wednesday in Nature, it’s accompanied by both a video and an interactive website where you can click your way from brain region to brain region, seeing what kinds of words are processed in each. Yet neuroscientists aren’t uniformly impressed.
  • invoked an old metaphor to explain why he isn’t convinced by the analysis: He compared it to establishing a theory of how weather works by pointing a video camera out the window for 7 hours.
  • Indeed, among neuroscientists, the new “comprehensive atlas” of the cerebral cortex is almost as controversial as a historical atlas of the Middle East. That’s because every word has a constellation of meanings and associations — and it’s hard for scientists to agree about how best to study them in the lab.
  • For this study, neuroscientist Jack Gallant and his team at the University of California, Berkeley played more than two hours’ worth of stories from the Moth Radio Hour for seven grad students and postdocs while measuring their cerebral blood flow using functional magnetic resonance imaging. Then, they linked the activity in some 50,000 pea-sized regions of the cortex to the “meaning” of the words being heard at that moment.
  • How, you might ask, did they establish the meaning of words? The neuroscientists pulled all the nouns and verbs from the podcasts. With a computer program, they then looked across millions of pages of text to see how often the words from the podcasts are used near 985 common words taken from Wikipedia’s List of 1,000 Basic Words. “Wolf,” for instance, would presumably be used more often in proximity to “dog” than to, say, “eggplant.” Using that data, the program assigned numbers that approximated the meaning of each individual word from the podcasts — and, with some fancy number crunching, they figured out what areas of the brain were activated when their research subjects heard words with certain meanings. (A toy code sketch of this co-occurrence idea appears after this list.)
  • Everyone agrees that the research is innovative in its method. After all, linking up the meanings of thousands of words to the second-by-second brain activity in thousands of tiny brain regions is no mean feat. “That’s way more data than any human being can possibly think about,” said Gallant.
  • What they can’t agree on is what it means. “In this study, our goal was not to ask a specific question. Our goal was to map everything so that we can ask questions after that,” said Gallant. “One of the most frequent questions we get is, ‘What does it mean?’ If I gave you a globe, you wouldn’t ask what it means, you’d start using it for stuff. You can look for the smallest ocean or how long it will take to get to San Francisco.”
  • This “data-driven approach” still involves assumptions about how to break up language into different categories of meaning.
  • “Of course it’s a very simplified version of how meaning is captured in our minds, but it seems to be a pretty good proxy,” she said.
  • hordes of unanswered questions: “We can map where your brain represents the meaning of a narrative text that is associated with family, but we don’t know why the brain is responding to family at that location. Is it the word ‘father’ itself? Is it your memories of your own father? Is it your own thinking about being a parent yourself?” He hopes that it’s just those types of questions that researchers will ask, using his brain map as a guide.
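
As a rough illustration of the co-occurrence idea described above, here is a minimal Python sketch. It is a toy under invented assumptions, not the Gallant lab’s actual pipeline: the four-word basis vocabulary and the tiny corpus stand in for the 985 Wikipedia-derived words and the millions of pages of text.

```python
from collections import Counter

# Hypothetical stand-ins: the real study used 985 basis words and a huge corpus.
BASIS_WORDS = ["dog", "eggplant", "family", "time"]
WINDOW = 5  # how many words on either side count as "near"

def meaning_vector(target, tokens):
    """Approximate a word's meaning by how often each basis word appears near it."""
    counts = Counter()
    for i, token in enumerate(tokens):
        if token != target:
            continue
        lo, hi = max(0, i - WINDOW), i + WINDOW + 1
        for neighbor in tokens[lo:i] + tokens[i + 1:hi]:
            if neighbor in BASIS_WORDS:
                counts[neighbor] += 1
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in BASIS_WORDS]  # normalized co-occurrence vector

corpus = ("the wolf ran with the dog while the dog chased the wolf "
          "through the family garden for a long time").split()
print(meaning_vector("wolf", corpus))  # scores higher on the "dog" axis than "eggplant"
```

Built over a real corpus, vectors like this place “wolf” nearer to “dog” than to “eggplant,” which is the property the study’s regression against moment-by-moment brain activity relies on.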

When Was America Greatest? - The New York Times - 0 views

  • The slogan evokes a time when America was stronger and more prosperous. But Mr. Trump doesn’t specify whether he’s expressing nostalgia for the 1950s — or 10 years ago. That vagueness is reflected by his voters, according to the results of a new survey, conducted online by the digital media and polling company Morning Consult.
  • Trump supporters offered a wide range of answers, with no distinct pattern. The most popular choice was the year 2000. But 1955, 1960, 1970 and 1985 were also popular. More than 2 percent of Trump’s supporters picked 2015, when Mr. Trump’s campaign began.
  • Political science research suggests that Americans’ optimism can be influenced by whether their political party is in the White House. So it’s perhaps not surprising that Democrats feel better than Republicans about current circumstances.
  • In March, Pew asked people whether life was better for people like them 50 years ago — and a majority of Republicans answered yes. Trump supporters were the most emphatic, with 75 percent saying things were better in the mid-1960s. Democrats, though, were less enthusiastic about the past. Forty-eight percent said life was better now than it was 50 years ago, while 17 percent of Democrats said it was the same, and only 28 percent said it was worse.
  • In the Morning Consult survey, 44 percent of people over all said America’s greatest years were ahead of it, while 36 percent said those years had already passed. But in an election when America’s past greatness has played such a starring role, we wanted to see more details about just how voters saw the past and the future.
  • So, when was the greatest year? Over all, 2000 was the most popular choice, a preference that cut across political party, candidate preference, gender and age. The year’s popularity may partly reflect people’s fondness for round numbers. But many voters explained their choice by referring to a greater sense of security. The Sept. 11 attacks occurred the following year. (An election year also has something for all partisans to grab onto. Bill Clinton was president that year, but George W. Bush won the election to replace him.)
  • Some people, of course, reached farther back into history. The year the Declaration of Independence was signed, 1776, got a few votes. One person chose 1789, the year the Constitution took effect. One person chose 1800. One chose 1860, the year Southern states began to secede from the Union. But most answers were of a more recent vintage.
  • partisan patterns in views of America’s greatness. Republicans, over all, recall the late 1950s and the mid-1980s most fondly. Sample explanations: “Reagan.” “Economy was booming.” “No wars!” “Life was simpler.” “Strong family values.” The distribution of Trump supporters’ greatest years is somewhat similar to the Republican trend, but more widely dispersed over the last 70 years.
  • Democrats seem to think America’s greatest days were more recent; they were more likely to pick a year in the 1990s, or since 2000. After 2000, their second-most-popular answer was 2016. Sample explanations: “We’re getting better.” “Improving social justice.” “Technology.” Even 2008, a year of financial collapse, was pretty popular, perhaps because President Obama was also elected that year.

Is That Even a Thing? - The New York Times - 3 views

  • Speakers and writers of American English have recently taken to identifying a staggering and constantly changing array of trends, events, memes, products, lifestyle choices and phenomena of nearly every kind with a single label — a thing.
  • It would be easy to call this a curiosity of the language and leave it at that. Linguistic trends come and go.
  • One could, on the other hand, consider the use of “a thing” a symptom of an entire generation’s linguistic sloth, general inarticulateness and penchant for cutesy, empty, half-ironic formulations that create a self-satisfied barrier preventing any form of genuine engagement with the world around them.
  • My assumption is that language and experience mutually influence each other. Language not only captures experience, it conditions it. It sets expectations for experience and gives shape to it as it happens. What might register as inarticulateness can reflect a different way of understanding and experiencing the world.
  • The word “thing” has of course long played a versatile and generic role in our language, referring both to physical objects and abstract matters. “The thing is …” “Here’s the thing.” “The play’s the thing.” In these examples, “thing” denotes the matter at hand and functions as stage setting to emphasize an important point. One new thing about “a thing,” then, is the typical use of the indefinite article “a” to precede it. We talk about a thing because we are engaged in cataloging. The question is whether something counts as a thing. “A thing” is not just stage setting. Information is conveyed.
  • What information? One definition of “a thing” that suggests itself right away is “cultural phenomenon.” A new app, an item of celebrity gossip, the practices of a subculture. It seems likely that “a thing” comes from the phrase “the coolest/newest/latest thing.” But now, in a society where everything, even the past, is new — “new thing” verges on the redundant. If they weren’t new they wouldn’t be things.
  • Clearly, cultural phenomena have long existed and been called “fads,” “trends,” “rages” or have been designated by the category they belong to — “product,” “fashion,” “lifestyle,” etc. So why the application of this homogenizing general term to all of them? I think there are four main reasons.
  • First, the flood of content into the cultural sphere. That we are inundated is well known. Information besieges us in waves that thrash us against the shore until we retreat to the solid ground of work or sleep or exercise or actual human interaction, only to wade cautiously back into our smartphones. As we spend more and more time online, it becomes the content of our experience, and in this sense “things” have earned their name. “A thing” has become the basic unit of cultural ontology.
  • Second, the fragmentation of this sphere. The daily barrage of culture requires that we choose a sliver of the whole in order to keep up. Netflix genres like “Understated Romantic Road Trip Movies” make it clear that the individual is becoming his or her own niche market — the converse of the celebrity as brand. We are increasingly a society of brands attuning themselves to markets, and markets evaluating brands. The specificity of the market requires a wider range of content — of things — to satisfy it.
  • Third, the closing gap between satire and the real thing. The absurd excess of things has reached a point where the ironic detachment needed to cope with them is increasingly built into the things themselves, their marketing and the language we use to talk about them. The designator “a thing” is thus almost always tinged with ironic detachment. It puts the thing at arm’s length. You can hardly say “a thing” without a wary glint in your eye.
  • Finally, the growing sense that these phenomena are all the same. As we step back from “things,” they recede into the distance and begin to blur together. We call them all by the same name because they are the same at bottom: All are pieces of the Internet. A thing is for the most part experienced through this medium and generated by it. Even if they arise outside it, things owe their existence as things to the Internet. Google is thus always the arbiter of the question, “Is that a real thing?”
  • “A thing,” then, corresponds to a real need we have, to catalog and group together the items of cultural experience, while keeping them at a sufficient distance so that we can at least feign unified consciousness in the face of a world gone to pieces.

How 'Empowerment' Became Something for Women to Buy - The New York Times - 0 views

  • The mix of things presumed to transmit and increase female power is without limit yet still depressingly limiting. “Empowerment” wasn’t always so trivialized, or so corporate, or even so clamorously attached to women.
  • Four decades ago, the word had much more in common with Latin American liberation theology than it did with “Lean In.” In 1968, the Brazilian academic Paulo Freire coined the word “conscientization,” empowerment’s precursor, as the process by which an oppressed person perceives the structural conditions of his oppression and is subsequently able to take action against his oppressors.
  • Eight years later, the educator Barbara Bryant Solomon, writing about American black communities, gave this notion a new name, “empowerment.” It was meant as an ethos for social workers in marginalized communities, to discourage paternalism and encourage their clients to solve problems in their own ways. Then in 1981, Julian Rappaport, a psychologist, broadened the concept into a political theory of power that viewed personal competency as fundamentally limitless; it placed faith in the individual and laid at her feet a corresponding amount of responsibility too.
  • Sneakily, empowerment had turned into a theory that applied to the needy while describing a process more realistically applicable to the rich. The word was built on a misaligned foundation; no amount of awareness can change the fact that it’s the already-powerful who tend to experience empowerment at any meaningful rate. Today “empowerment” invokes power while signifying the lack of it. It functions like an explorer staking a claim on new territory with a white flag.
  • highly marketable “women’s empowerment,” neither practice nor praxis, nor really theory, but a glossy, dizzying product instead. Women’s empowerment borrows the virtuous window-dressing of the social worker’s doctrine and kicks its substance to the side. It’s about pleasure, not power; it’s individualistic and subjective, tailored to insecurity and desire.
  • The new empowerment doesn’t increase potential so much as it assures you that your potential is just fine. Even when the thing being described as “empowering” is personal and mildly defiant (not shaving, not breast-feeding, not listening to men, et cetera), what’s being marketed is a certain identity.
  • When consumer purchases aren’t made out to be a path to female empowerment, a branded corporate experience often is. There’s TEDWomen (“about the power of women”), the Forbes Women’s Summit (“#RedefinePower”) and Fortune’s Most Powerful Women Conference (tickets are $10,000).
  • This consumption-and-conference empowerment dilutes the word to pitch-speak, and the concept to something that imitates rather than alters the structures of the world. This version of empowerment can be actively disempowering: It’s a series of objects and experiences you can purchase while the conditions determining who can access and accumulate power stay the same. The ready participation of well-off women in this strategy also points to a deep truth about the word “empowerment”: that it has never been defined by the people who actually need it. People who talk empowerment are, by definition, already there.
  • I have never said “empowerment” sincerely or heard it from a single one of my friends. The formulation has been diluted to something representational and bloodless — an architectural rendering of a building that will never be built. But despite its nonexistence in honest conversation, “empowerment” goes on thriving. It’s uniquely marketable, like the female body, which is where women’s empowerment is forced to live.
  • Like Sandberg, Kardashian is the apotheosis of a particular brand of largely contentless feminism, a celebratory form divorced from material politics, which makes it palatable — maybe irresistible — to the business world. The mistake would be to locate further empowerment in choosing between the two. Corporate empowerment — as well as the lightweight, self-exculpatory feminism it rides on — feeds ravenously on the distracting performance of identity, that buffet of false opposition.

The Psychology of Risk Perception Explains Why People Don't Fret the Pacific Northwest'... - 0 views

  • what psychology teaches us. Turns out most of us just aren’t that good at calculating risk, especially when it comes to huge natural events like earthquakes. That also means we’re not very good at mitigating those kinds of risks. Why? And is it possible to get around our short-sightedness, so that this time, we’re actually prepared? Risk perception is a vast, complex field of research. Here are just some of the core findings.
  • Studies show that when people calculate risk, especially when the stakes are high, we rely much more on feeling than fact. And we have trouble connecting emotionally to something scary if the odds of it happening today or tomorrow aren’t particularly high. So, if an earthquake, flood, tornado or hurricane isn’t immediately imminent, people are unlikely to act. “Perceiving risk is all about how scary or not do the facts feel,”
  • This feeling also relates to how we perceive natural, as opposed to human-made, threats. We tend to be more tolerant of nature than of other people who would knowingly impose risks upon us—terrorists being the clearest example. “We think that nature is out of our control—it’s not malicious, it’s not profiting from us, we just have to bear with it,”
  • And in many cases, though not all, people living in areas threatened by severe natural hazards do so by choice. If a risk has not been imposed on us, we take it much less seriously. Though Schulz’s piece certainly made a splash online, it is hard to imagine a mass exodus of Portlanders and Seattleites in response. Hey, they like it there.
  • They don’t have much to compare the future earthquake to. After all, there hasn’t been an earthquake or tsunami like it there since roughly 1700. Schulz poeticizes this problem, calling out humans for their “ignorance of or an indifference to those planetary gears which turn more slowly than our own.” Once again, this confounds our emotional connection to the risk.
  • The belief that an unlikely event won’t happen again for a while is called the gambler’s fallacy. Probability doesn’t work like that. The odds are the same with every roll of the dice. (A short simulation after this list illustrates the point.)
  • But our “temporal parochialism,” as Schulz calls it, also undoes our grasp on probability. “We think probability happens with some sort of regularity or pattern,” says Ropeik. “If an earthquake is projected to hit within 50 years, when there hasn’t been one for centuries, we don’t think it’s going to happen.” Illogical thinking works in reverse, too: “If a minor earthquake just happened in Seattle, we think we’re safe.”
  • For individuals and government alike, addressing every point of concern requires a cost-benefit analysis. When kids barely have pencils and paper in schools that already exist, how much is appropriate to invest in earthquake preparedness? Even when that earthquake will kill thousands, displace millions, and cripple a region’s economy for decades to come—as Cascadia is projected to—the answer is complicated. “You immediately run into competing issues,” says Slovic. “When you’re putting resources into earthquake protection that has to be taken away from current social needs—that is a very difficult sell.”
  • There are things people can do to combat our innate irrationality. The first is obvious: education. California has a seismic safety commission whose job is to publicize the risks of earthquakes and advocate for preparedness at household and state policy levels.
  • Another idea is similar to food safety ratings in the windows of some cities’ restaurants. Schulz reports that some 75 percent of Oregon’s structures aren’t designed to hold up to a really big Cascadia quake. “These buildings could have their risk and safety score publicly posted,” says Slovic. “That would motivate people to retrofit or mitigate those risks, particularly if they are schools.”
  • science points to a hard truth. Humans are simply inclined to be more concerned about what’s immediately in front of us: Snakes, fast-moving cars, unfamiliar chemical compounds in our breakfast cereal and the like will always elicit a quicker response than an abstract, far-off hazard.
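
The gambler’s-fallacy point above is easy to verify with a few lines of simulation. This is a minimal sketch under stated assumptions (fair six-sided dice, an arbitrary ten-roll “drought” window): a long streak without a six does not make one any more likely on the next roll.

```python
import random

random.seed(0)
N = 200_000
rolls = [random.randint(1, 6) for _ in range(N)]

# Condition on a "drought": the previous ten rolls contained no six.
droughts = hits_after_drought = 0
for i in range(10, N):
    if 6 not in rolls[i - 10:i]:
        droughts += 1
        hits_after_drought += (rolls[i] == 6)

print(hits_after_drought / droughts)  # ~0.167, i.e. still 1/6, drought or not
```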

In Science, It's Never 'Just a Theory' - The New York Times - 0 views

  • In everyday conversation, we tend to use the word “theory” to mean a hunch, an idle speculation, or a crackpot notion.
  • That’s not what “theory” means to scientists. “In science, the word theory isn’t applied lightly,” Kenneth R. Miller, a cell biologist at Brown University, said. “It doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.”
  • In 2002, the board of education in Cobb County, Ga., adopted the textbook but also required science teachers to put a warning sticker inside the cover of every copy. “Evolution is a theory, not a fact, regarding the origin of living things,” the sticker read, in part. In 2004, several Cobb County parents filed a lawsuit against the county board of education to have the stickers removed. They called Dr. Miller, who testified for about two hours, explaining, among other things, the strength of evidence for the theory of evolution.
  • It’s helpful, he argues, to think about theories as being like maps. “To say something is a map is not to say it’s a hunch,” said Dr. Godfrey-Smith, a professor at the City University of New York and the University of Sydney. “It’s an attempt to represent some territory.” A theory, likewise, represents a territory of science. Instead of rivers, hills, and towns, the pieces of the territory are facts. “To call something a map is not to say anything about how good it is,” Dr. Godfrey-Smith added. “There are fantastically good maps where there’s not a shred of doubt about their accuracy. And there are maps that are speculative.”
  • To judge a map’s quality, we can see how well it guides us through its territory. In a similar way, scientists test out new theories against evidence. Just as many maps have proven to be unreliable, many theories have been cast aside. But other theories have become the foundation of modern science, such as the theory of evolution, the general theory of relativity, the theory of plate tectonics, the theory that the sun is at the center of the solar system, and the germ theory of disease. “To the best of our ability, we’ve tested them, and they’ve held up,” said Dr. Miller. “And that’s why we’ve held on to these things.”

How Not to Explain Success - The New York Times - 0 views

  • DO you remember the controversy two years ago, when the Yale law professors Amy Chua (author of “Battle Hymn of the Tiger Mother”) and Jed Rubenfeld published “The Triple Package: How Three Unlikely Traits Explain the Rise and Fall of Cultural Groups in America”? We sure do. As psychologists, we found the book intriguing, because its topic — why some people succeed and others don’t — has long been a basic research question in social science, and its authors were advancing a novel argument. They contended that certain ethnic and religious minority groups (among them, Cubans, Jews and Indians) had achieved disproportionate success in America because their individual members possessed a combination of three specific traits: a belief that their group was inherently superior to others; a sense of personal insecurity; and a high degree of impulse control.
  • it offered no rigorous quantitative evidence to support its theory. This, of course, didn’t stop people from attacking or defending the book. But it meant that the debate consisted largely of arguments based on circumstantial evidence.
  • we took the time to empirically test the triple package hypothesis directly. Our results have just been published in the journal Personality and Individual Differences. We found scant evidence for Professors Chua and Rubenfeld’s theory.
  • We conducted two online surveys of a total of 1,258 adults in the United States. Each participant completed a variety of standard questionnaires to measure his or her impulsiveness, ethnocentrism and personal insecurity. (Professors Chua and Rubenfeld describe insecurity as “a goading anxiety about oneself and one’s place in society.” Since this concept was the most complex and counterintuitive element of their theory, we measured it several different ways, each of which captured a slightly different aspect.) Next, the participants completed a test of their cognitive abilities. Then they reported their income, occupation, education and other achievements, such as receiving artistic, athletic or leadership awards, all of which we combined to give each person a single score for overall success. Finally, our participants indicated their age, sex and parents’ levels of education.
  • First, the more successful participants had higher cognitive ability, more educated parents and better impulse control.
  • This finding is exactly what you would expect from accepted social science. Long before “The Triple Package,” researchers determined that the personality trait of conscientiousness, which encompasses the triple package’s impulse control component, was an important predictor of success — but that a person’s intelligence and socioeconomic background were equally or even more important.
  • Our second finding was that the more successful participants did not possess greater feelings of ethnocentrism or personal insecurity. In fact, for insecurity, the opposite was true: Emotional stability was related to greater success.
  • Finally, we found no special “synergy” among the triple package traits. According to Professors Chua and Rubenfeld, the three traits have to work together to create success — a sense of group superiority creates drive only in people who also view themselves as not good enough, for example, and drive is useless without impulse control. But in our data, people scoring in the top half on all three traits were no more successful than everyone else.
  • To be clear, we have no objection to Professors Chua and Rubenfeld’s devising a novel social-psychological theory of success. During the peer-review process before publication, our paper was criticized on the grounds that a theory created by law professors could not have contributed to empirical social science, and that ideas published in a popular book did not merit evaluation in an academic journal. We disagree. Outsiders can make creative and even revolutionary contributions to a discipline, as the psychologists Amos Tversky and Daniel Kahneman did for economics. And professors do not further the advancement of knowledge by remaining aloof from debates where they can apply their expertise. Researchers should engage the public, dispel popular myths and even affirm “common sense” when the evidence warrants.
  • it did not stand up to direct empirical tests. Our conclusion regarding “The Triple Package” is expressed by the saying, “What is new is not correct, and what is correct is not new.”

What Architecture Is Doing to Your Brain - CityLab - 1 views

  • Much of the student population would likely agree that the library’s menacing figure on the quad is nothing short of soul-crushing. New research conducted by a team of architects and neuroscientists suggests that architecture may indeed affect mental states, though they choose to focus on the positive.
  • I spoke with Dr. Julio Bermudez, the lead of a new study that uses fMRI to capture the effects of architecture on the brain. His team operates with the goal of using the scientific method to transform something opaque—the qualitative “phenomenologies of our built environment”—into neuroscientific observations that architects and city planners can deliberately design for. Bermudez and his team’s research question focuses on buildings and sites designed to elicit contemplation: They theorize that the presence of “contemplative architecture” in one’s environment may over time produce the same health benefits as traditional “internally induced” meditation, except with much less effort by the individual.
  • By showing 12 architects photos of contemplative and non-contemplative buildings from facade to interior, the researchers were able to observe the brain activity that occurred as subjects "imagined they were transported to the places being shown." All of the architects were white, right-handed men with no prior meditative training, creating the necessary (if comical) uniformity for neuroscientific research—the team wanted to ensure that the brain scans would not be influenced by factors unrelated to the photos, like gender, race, or handedness. For instance, the brain scans of left- and right-handed people often look different even when subjects are performing the same task.
  • In addition to posing an interesting control on the experiment, the decision to use architects was a strategic one meant to increase the researchers’ chances of achieving conclusive results. Though everyone encounters architecture, studies on the built environment struggle for funding because, as Bermudez remarked with a sigh, “it’s difficult to suggest that people are dying from it.” Architects were a natural choice for the pilot study because, the team reasoned, their critical training and experience would make them sensitive to features of the buildings that a lay person might overlook.
  • they deployed online surveys in Spanish and English to gather testimony on extraordinary architectural experiences (EAEs), or encounters with places that fundamentally alter one’s normal state of being. Critically, most of the buildings or sites mentioned in the 2,982 testimonies were designed with contemplation in mind, whether spiritual, aesthetic, religious, or symbolic, leading the researchers to conclude that “buildings may induce insightful, profound, and transformative contemplative states, [and] buildings designed to provoke contemplation seem to be succeeding.”
  • Anticipating skeptics who would claim that these experiences are subjective, the researchers expanded the question to draw on the established neuroscientific subfield of meditation, with some important differences. Related studies to date have focused on internally produced states that are easily replicated in the lab, and on aesthetic evaluation, or the activity that occurs in the orbital frontal cortex as we make snap judgments about whether we find things ugly or beautiful.
  • Bermudez and his team expected that architecturally induced contemplative states would be strong, non-evaluative aesthetic experiences — eliciting more activity in areas associated with emotion and pleasure, but less activity in the orbital frontal cortex.
  • The presence of an external stimulus (the photos of the buildings) also removes the tedious self-regulation that occurs in the prefrontal cortex during traditional meditation. The interviews of the 12 subjects revealed that “peacefulness and relaxation, lessening of mind wandering, increasing of attention, and deepening of experience” were all common effects of viewing the photos—also common was a slight element of aesthetic judgment, seemingly inescapable in the crowd of critics.
  • The provisional conclusions of the study are that the brain behaves differently when exposed to contemplative and non-contemplative buildings; that contemplative states elicited through “architectural aesthetics” are similar to the contemplation of traditional meditation in some ways and different in others; and, finally, that “architectural design matters.”
  • reinforces a growing trend in architecture and design as researchers are beginning to study how the built environment affects the people who live in it. ANFA proclaims that “some observers have characterized what is happening in neuroscience as the most exciting frontier of human discovery since the Renaissance.”
  • gritty details: the optimal ceiling heights for different cognitive functions; the best city design for eliciting our natural exploratory tendencies and making way-finding easier; the ideal hospital layout to improve memory-related tasks in patients recovering from certain brain injuries; the influence of different types and quantities of light within a built space on mood and performance.  

Our Natural History, Endangered - The New York Times - 0 views

  • Worse, this rumored dustiness reinforces the widespread notion that natural history museums are about the past — just a place to display bugs and brontosaurs. Visitors may go there to be entertained, or even awe-struck, but they are often completely unaware that curators behind the scenes are conducting research into climate change, species extinction and other pressing concerns of our day. That lack of awareness is one reason these museums are now routinely being pushed to the brink. Even the National Science Foundation, long a stalwart of federal support for these museums, announced this month that it was suspending funding for natural history collections as it conducts a yearlong budget review.
  • It gets worse: A new Republican governor last year shut down the renowned Illinois State Museum, ostensibly to save the state $4.8 million a year. The museum pointed out that this would actually cost $33 million a year in lost tourism revenue and an untold amount in grants. But the closing went through, endangering a trove of 10 million artifacts, from mastodon bones to Native American tools, collected over 138 years, and now just languishing in the shuttered building. Eric Grimm, the museum’s director of science, characterized it as an act of “political corruption and malevolent anti-intellectualism.”
  • Other museums have survived by shifting their focus from research to something like entertainment.
  • The pandering can be insidious, too. The Perot Museum of Nature and Science in Dallas, which treats visitors to a virtual ride down a hydraulic fracturing well, recently made headlines for avoiding explicit references to climate change. Other museums omit scientific information on evolution. “We don’t need people to come in here and reject us,”
  • Even the best natural history museums have been obliged to reduce their scientific staff in the face of government cutbacks and the decline in donations following the 2008 economic crash. They still have their collections, and their public still comes through the door. But they no longer employ enough scientists to interpret those collections adequately for visitors or the world at large. Hence the journal Nature last year characterized natural history collections as “the endangered dead.”
  • these collections are less about the past than about our world and how it is changing. Sediment cores like the ones at the Illinois State Museum, for instance, may not sound terribly important, but the pollen in them reveals how past climates changed, what species lived and died as a result, and thus how our own future may be rapidly unfolding.
  • Natural history museums are so focused on the future that they have for centuries routinely preserved such specimens to answer questions they didn’t yet know how to ask, requiring methodologies that had not yet been invented, to make discoveries that would have been, for the original collectors, inconceivable.
  • THE people who first put gigantic mammoth and mastodon specimens in museums, for instance, did so mainly out of dumb wonderment. But those specimens soon led to the stunning 18th-century recognition that parts of God’s creation could become extinct. The heretical idea of extinction then became an essential preamble to Darwin, whose understanding of evolution by natural selection depended in turn on the detailed study of barnacle specimens collected and preserved over long periods and for no particular reason. Today, those same specimens continue to answer new questions with the help of genome sequencing, CT scans, stable isotope analysis and other technologies.
  • These museums also play a critical role in protecting what’s left of the natural world, in part because they often combine biological and botanical knowledge with broad anthropological experience.
  • “You have no nationality. You are scientists. You speak for nature.” Just since 1999, according to the Field Museum, inventories by its curators and their collaborators have been a key factor in the protection of 26.6 million acres of wilderness, mainly in the headwaters of the Amazon.
  • It may be optimistic to say that natural history museums have saved the world. It may even be too late for that. But they provide one other critical service that can save us, and our sense of wonder: Almost everybody in this country — even children in Denver who have never been to the Rocky Mountains, or people in San Francisco who have never walked on a Pacific Ocean beach — goes to a natural history museum at some point in his life, and these visits influence us in deep and unpredictable ways.
  • we dimly begin to understand the passage of time and cultures, and how our own species fits amid millions of others. We start to understand the strangeness and splendor of the only planet where we will ever have the great pleasure of living.

The rise of the 'gentleman's A' and the GPA arms race - The Washington Post - 2 views

  • A’s — once reserved for recognizing excellence and distinction — are today the most commonly awarded grades in America.
  • That’s true at both Ivy League institutions and community colleges, at huge flagship publics and tiny liberal arts schools, and in English, ethnic studies and engineering departments alike. Across the country, wherever and whatever they study, mediocre students are increasingly likely to receive supposedly superlative grades.
  • Analyzing 70 years of transcript records from more than 400 schools, the researchers found that the share of A grades has tripled, from just 15 percent of grades in 1940 to 45 percent in 2013. At private schools, A’s account for nearly a majority of grades awarded.
  • ...11 more annotations...
  • Students sometimes argue that their talents have improved so dramatically that they are deserving of higher grades. Past studies, however, have found little evidence of this.
  • While it’s true that top schools have become more selective, the overall universe of students attending college has gotten broader, reflecting a wider distribution of abilities and levels of preparation, especially at the bottom. College students today also study less and do not appear to be more literate than their predecessors were.
  • Plus, of course, even if students have gotten smarter, or at least more efficient at studying (hey, computers do help), grades are arguably also supposed to measure relative achievement among classmates.
  • Affirmative action also sometimes gets blamed for rising grades; supposedly, professors have been loath to hurt the feelings of underprepared minority students. Rojstaczer and Healy note, however, that much of the increase in minority enrollment occurred from the mid-1970s to mid-’80s, the only period in recent decades when average GPAs fell.
  • That first era, the researchers say, can be explained by changes in pedagogical philosophy (some professors began seeing grades as overly authoritarian and ineffective at motivating students) and mortal exigencies (male students needed higher grades to avoid the Vietnam draft).
  • The authors attribute today’s inflation to the consumerization of higher education. That is, students pay more in tuition, and expect more in return — better service, better facilities and better grades. Or at least a leg up in employment and graduate school admissions through stronger transcripts.
  • some universities have explicitly lifted their grading curves (sometimes retroactively) to make graduates more competitive in the job market, leading to a sort of grade inflation arms race
  • But rising tuition may not be the sole driver of students’ expectations for better grades, given that high school grades have also risen in recent decades. And rather than some top-down directive from administrators, grade inflation also seems related to a steady creep of pressure on professors to give higher grades in exchange for better teaching evaluations.
  • It’s unclear how the clustering of grades near the top is affecting student effort. But it certainly makes it harder to accurately measure how much students have learned. It also makes it more challenging for grad schools and employers to sort the superstars from the also-rans
  • Lax or at least inconsistent grading standards can also distort what students — especially women — choose to study, pushing them away from more stingily graded science, technology, engineering and math fields and into the humanities, where high grades are easier to come by.
  • Without collective action — which means both standing up to students and publicly shaming other schools into adopting higher standards — the arms race will continue.
kushnerha

BBC - Future - Why does walking through doorways make us forget? - 0 views

  • We’ve all done it. Run upstairs to get your keys, but forget that it is them you’re looking for once you get to the bedroom. Open the fridge door and reach for the middle shelf, only to realise that you can’t remember why you opened the fridge in the first place. Or wait for a moment to interrupt a friend, only to find that the burning issue that made you want to interrupt has now vanished from your mind
  • It’s known as the “Doorway Effect”, and it reveals some important features of how our minds are organised. Understanding this might help us appreciate those temporary moments of forgetfulness as more than just an annoyance
  • “What are you doing today?” she asks the first. “I’m putting brick after sodding brick on top of another,” sighs the first. “What are you doing today?” she asks the second. “I’m building a wall,” is the simple reply. But the third builder swells with pride when asked, and replies: “I’m building a cathedral!”
  • ...7 more annotations...
  • Maybe you heard that story as encouragement to think of the big picture, but to the psychologist in you the important moral is that any action has to be thought of at multiple levels if you are going to carry it out successfully. The third builder might have the most inspiring view of their day-job, but nobody can build a cathedral without figuring out how to successfully put one brick on top of another like the first builder.
  • As we move through our days our attention shifts between these levels – from our goals and ambitions, to plans and strategies, and to the lowest levels, our concrete actions. When things are going well, often in familiar situations, we keep our attention on what we want, and how we do it seems to take care of itself. If you’re a skilled driver then you manage the gears, indicators and wheel automatically, and your attention is probably caught up in the less routine business of navigating the traffic or talking to your passengers. When things are less routine we have to shift our attention to the details of what we’re doing, taking our minds off the bigger picture for a moment.
  • The way our attention moves up and down the hierarchy of action is what allows us to carry out complex behaviours, stitching together a coherent plan over multiple moments, in multiple places or requiring multiple actions.
  • The Doorway Effect occurs when our attention moves between levels, and it reflects the reliance of our memories – even memories for what we were about to do – on the environment we’re in.
  • Imagine that we’re going upstairs to get our keys and forget that it is the keys we came for as soon as we enter the bedroom. Psychologically, what has happened is that the plan (“Keys!”) has been forgotten even in the middle of implementing a necessary part of the strategy (“Go to bedroom!”). Probably the plan itself is part of a larger plan (“Get ready to leave the house!”), which is part of plans on a wider and wider scale (“Go to work!”, “Keep my job!”, “Be a productive and responsible citizen”, or whatever). Each scale requires attention at some point. Somewhere in navigating this complex hierarchy the need for keys popped into mind, and like a circus performer setting plates spinning on poles, your attention focussed on it long enough to construct a plan, but then moved on to the next plate
  • And sometimes spinning plates fall. Our memories, even for our goals, are embedded in webs of associations. That can be the physical environment in which we form them, which is why revisiting our childhood home can bring back a flood of previously forgotten memories, or it can be the mental environment – the set of things we were just thinking about when that thing popped into mind.
  • The Doorway Effect occurs because we change both the physical and mental environments, moving to a different room and thinking about different things. That hastily thought up goal, which was probably only one plate among the many we’re trying to spin, gets forgotten when the context changes.
kushnerha

Philosophy's True Home - The New York Times - 0 views

  • We’ve all heard the argument that philosophy is isolated, an “ivory tower” discipline cut off from virtually every other progress-making pursuit of knowledge, including math and the sciences, as well as from the actual concerns of daily life. The reasons given for this are many. In a widely read essay in this series, “When Philosophy Lost Its Way,” Robert Frodeman and Adam Briggle claim that it was philosophy’s institutionalization in the university in the late 19th century that separated it from the study of humanity and nature, now the province of social and natural sciences.
  • This institutionalization, the authors claim, led it to betray its central aim of articulating the knowledge needed to live virtuous and rewarding lives. I have a different view: Philosophy isn’t separated from the social, natural or mathematical sciences, nor is it neglecting the study of goodness, justice and virtue, which was never its central aim.
  • identified philosophy with informal linguistic analysis. Fortunately, this narrow view didn’t stop them from contributing to the science of language and the study of law. Now long gone, neither movement defined the philosophy of its day and neither arose from locating it in universities.
  • ...13 more annotations...
  • The authors claim that philosophy abandoned its relationship to other disciplines by creating its own purified domain, accessible only to credentialed professionals. It is true that from roughly 1930 to 1950, some philosophers — logical empiricists, in particular — did speak of philosophy having its own exclusive subject matter. But since that subject matter was logical analysis aimed at unifying all of science, interdisciplinarity was front and center.
  • Philosophy also played a role in 20th-century physics, influencing the great physicists Albert Einstein, Niels Bohr and Werner Heisenberg. The philosophers Moritz Schlick and Hans Reichenbach reciprocated that interest by assimilating the new physics into their philosophies.
  • developed ideas relating logic to linguistic meaning that provided a framework for studying meaning in all human languages. Others, including Paul Grice and J.L. Austin, explained how linguistic meaning mixes with contextual information to enrich communicative contents and how certain linguistic performances change social facts. Today a new philosophical conception of the relationship between meaning and cognition adds a further dimension to linguistic science.
  • Decision theory — the science of rational norms governing action, belief and decision under uncertainty — was developed by the 20th-century philosophers Frank Ramsey, Rudolf Carnap, Richard Jeffrey and others. It plays a foundational role in political science and economics by telling us what rationality requires, given our evidence, priorities and the strength of our beliefs. Today, no area of philosophy is more successful in attracting top young minds. (A minimal worked example of the theory’s core calculation appears after this list.)
  • Philosophy also assisted psychology in its long march away from narrow behaviorism and speculative Freudianism. The mid-20th-century functionalist perspective pioneered by Hilary Putnam was particularly important. According to it, pain, pleasure and belief are neither behavioral dispositions nor bare neurological states. They are interacting internal causes, capable of very different physical realizations, that serve the goals of individuals in specific ways. This view is now embedded in cognitive psychology and neuroscience.
  • philosopher-mathematicians Gottlob Frege, Bertrand Russell, Kurt Gödel, Alonzo Church and Alan Turing invented symbolic logic, helped establish the set-theoretic foundations of mathematics, and gave us the formal theory of computation that ushered in the digital age
  • Philosophy of biology is following a similar path. Today’s philosophy of science is less accessible than Aristotle’s natural philosophy chiefly because it systematizes a larger, more technically sophisticated body of knowledge.
  • Philosophy’s interaction with mathematics, linguistics, economics, political science, psychology and physics requires specialization. Far from fostering isolation, this specialization makes communication and cooperation among disciplines possible. This has always been so.
  • Nor did scientific progress rob philosophy of its former scientific subject matter, leaving it to concentrate on the broadly moral. In fact, philosophy thrives when enough is known to make progress conceivable, but it remains unachieved because of methodological confusion. Philosophy helps break the impasse by articulating new questions, posing possible solutions and forging new conceptual tools.
  • Our knowledge of the universe and ourselves expands like a ripple surrounding a pebble dropped in a pool. As we move away from the center of the spreading circle, its area, representing our secure knowledge, grows. But so does its circumference, representing the border where knowledge blurs into uncertainty and speculation, and methodological confusion returns. Philosophy patrols the border, trying to understand how we got there and to conceptualize our next move. Its job is unending.
  • Although progress in ethics, political philosophy and the illumination of life’s meaning has been less impressive than advances in some other areas, it is accelerating.
  • the advances in our understanding because of careful formulation and critical evaluation of theories of goodness, rightness, justice and human flourishing by philosophers since 1970 compare well to the advances made by philosophers from Aristotle to 1970
  • The knowledge required to maintain philosophy’s continuing task, including its vital connection to other disciplines, is too vast to be held in one mind. Despite the often-repeated idea that philosophy’s true calling can only be fulfilled in the public square, philosophers actually function best in universities, where they acquire and share knowledge with their colleagues in other disciplines. It is also vital for philosophers to engage students — both those who major in the subject, and those who do not. Although philosophy has never had a mass audience, it remains remarkably accessible to the average student; unlike the natural sciences, its frontiers can be reached in a few undergraduate courses.
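To make the decision-theory item above concrete: the theory’s core prescription is to choose the act with the highest probability-weighted utility. What follows is a minimal textbook-style sketch in Python; every probability and utility in it is an invented assumption for illustration, not an example drawn from the article itself.

# Minimal expected-utility sketch; all numbers below are invented
# assumptions for illustration.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one act
    return sum(p * u for p, u in outcomes)

# Decide whether to carry an umbrella given a 30% chance of rain:
acts = {
    "carry umbrella": [(0.3, 0.0), (0.7, -1.0)],   # dry, mildly encumbered
    "leave it home":  [(0.3, -10.0), (0.7, 0.0)],  # soaked if it rains
}
best = max(acts, key=lambda act: expected_utility(acts[act]))
print(best)  # "carry umbrella": expected utility -0.7 beats -3.0

Rationality, on this view, just means maximizing that weighted sum given one’s evidence (the probabilities) and priorities (the utilities), which is the sense in which the theory “tells us what rationality requires.”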
kushnerha

There's nothing wrong with grade inflation - The Washington Post - 0 views

  • By the early ’90s, so long as one had the good sense to major in the humanities — all bets were off in the STEM fields — it was nearly impossible to get a final grade below a B-minus at an elite college. According to a 2012 study, the average college GPA, which in the 1930s was a C-plus, had risen to a B at public universities and a B-plus at private schools. At Duke, Pomona and Harvard, D’s and F’s combine for just 2 percent of all grades. A Yale report found that 62 percent of all Yale grades are A or A-minus. According to a 2013 article in the Harvard Crimson, the median grade at Harvard was an A-minus, while the most common grade was an A.
  • The result is widespread panic about grade inflation at elite schools. (The phenomenon is not as prevalent at community colleges and less-selective universities.) Some blame students’ consumer mentality, a few see a correlation with small class sizes (departments with falling enrollments want to keep students happy), and many cite a general loss of rigor in a touchy-feely age.
  • Yet whenever elite schools have tried to fight grade inflation, it’s been a mess. Princeton instituted strict caps on the number of high grades awarded, then abandoned the plan, saying the caps dissuaded applicants and made students miserable. At Wellesley, grade-inflated humanities departments mandated that the average result in their introductory and intermediate classes not exceed a B-plus. According to one study, enrollment fell by one-fifth, and students were 30 percent less likely to major in one of these subjects.
  • ...12 more annotations...
  • I liked the joy my students found when they actually earned a grade they’d been reaching for. But whereas I once thought we needed to contain grades, I now see that we may as well let them float skyward. If grade inflation is bad, fighting it is worse. Our goal should be ending the centrality of grades altogether. For years, I feared that a world of only A’s would mean the end of meaningful grades; today, I’m certain of it. But what’s so bad about that?
  • It’s easy to see why schools want to fight grade inflation. Grades should motivate certain students: those afraid of the stigma of a bad grade or those ambitious, by temperament or conditioning, to succeed in measurable ways. Periodic grading during a term, on quizzes, tests or papers, provides feedback to students, which should enable them to do better. And grades theoretically signal to others, such as potential employers or graduate schools, how well the student did. (Grade-point averages are also used for prizes and class rankings, though that doesn’t strike me as an important feature.)
  • But it’s not clear that grades work well as motivators. Although recent research on the effects of grades is limited, several studies in the 1970s, 1980s and 1990s measured how students related to a task or a class when it was graded compared to when it was ungraded. Overall, graded students are less interested in the topic at hand and — for obvious, common-sense reasons — more inclined to pick the easiest possible task when given the chance. In the words of progressive-education theorist Alfie Kohn, author of “The Homework Myth,” “the quality of learning declines” when grades are introduced, becoming “shallower and more superficial when the point is to get a grade.”
  • Even where grades can be useful, as in describing what material a student has mastered, they are remarkably crude instruments. Yes, the student who gets a 100 on a calculus exam probably grasps the material better than the student with a 60 — but only if she retains the knowledge, which grades can’t show.
  • I still can’t say very well what separates a B from an A. What’s more, I never see the kind of incompetence or impudence that would merit a D or an F. And now, in our grade-inflated world, it’s even harder to use grades to motivate, or give feedback, or send a signal to future employers or graduate schools.
  • According to a 2012 study by the Chronicle of Higher Education, GPA was seventh out of eight factors employers considered in hiring, behind internships, extracurricular activities and previous employment. Last year, Stanford’s registrar told the Chronicle about “a clamor” from employers “for something more meaningful” than the traditional transcript. The Lumina Foundation gave a $1.27 million grant to two organizations for college administrators working to develop better student records, with grades only one part of a student’s final profile.
  • Some graduate schools, too, have basically ditched grades. “As long as you don’t bomb and flunk out, grades don’t matter very much in M.F.A. programs,” the director of one creative-writing program told the New York Times. To top humanities PhD programs, letters of reference and writing samples matter more than overall GPA (although students are surely expected to have received good grades in their intended areas of study). In fact, it’s impossible to get into good graduate or professional schools without multiple letters of reference, which have come to function as the kind of rich, descriptive comments that could go on transcripts in place of grades.
  • suggests that GPAs serve not to validate students from elite schools but to keep out those from less-prestigious schools and large public universities, where grades are less inflated. Grades at community colleges “have actually dropped” over the years, according to Stuart Rojstaczer, a co-author of the 2012 grade-inflation study. That means we have two systems: one for students at elite schools, who get jobs based on references, prestige and connections, and another for students everywhere else, who had better maintain a 3.0. Grades are a tool increasingly deployed against students without prestige.
  • The trouble is that, while it’s relatively easy for smaller colleges to go grade-free, with their low student-to-teacher ratios, it’s tough for professors at larger schools, who must evaluate more students, more quickly, with fewer resources. And adjuncts teaching five classes for poverty wages can’t write substantial term-end comments, so grades are a necessity if they want to give any feedback at all.
  • It would mean hiring more teachers and paying them better (which schools should do anyway). And if transcripts become more textured, graduate-school admission offices and employers will have to devote more resources to reading them, and to getting to know applicants through interviews and letters of reference — a salutary trend that is underway already.
  • When I think about getting rid of grades, I think of happier students, with whom I have more open, democratic relationships. I think about being forced to pay more attention to the quiet ones, since I’ll have to write something truthful about them, too. I’ve begun to wonder if a world without grades may be one of those states of affairs (like open marriages, bicycle lanes and single-payer health care) that Americans resist precisely because they seem too good, suspiciously good. Nothing worth doing is supposed to come easy.
  • Alfie Kohn, too, sees ideology at work in the grade-inflation panic. “Most of what powers the arguments against grade inflation is a very right-wing idea that excellence consists in beating everyone else around you,” he says. “Even when you have sorted them — even when they get to Harvard! — we have to sort them again.” In other words, we can trust only a system in which there are clear winners and losers.
kushnerha

New Critique Sees Flaws in Landmark Analysis of Psychology Studies - The New York Times - 0 views

  • A landmark 2015 report that cast doubt on the results of dozens of published psychology studies has exposed deep divisions in the field, serving as a reality check for many working researchers but as an affront to others who continue to insist the original research was sound.
  • On Thursday, a group of four researchers publicly challenged the report, arguing that it was statistically flawed and, as a result, wrong. The 2015 report, called the Reproducibility Project, found that fewer than 40 studies in a sample of 100 psychology papers in leading journals held up when retested by an independent team. The new critique by the four researchers countered that when that team’s statistical methodology was adjusted, the rate was closer to 100 percent. Neither the original analysis nor the critique found evidence of fraud or manipulation of data.
  • “That study got so much press, and the wrong conclusions were drawn from it,” said Timothy D. Wilson, a professor of psychology at the University of Virginia and an author of the new critique. “It’s a mistake to make generalizations from something that was done poorly, and this we think was done poorly.”
  • ...6 more annotations...
  • countered that the critique was highly biased: “They are making assumptions based on selectively interpreting data and ignoring data that’s antagonistic to their point of view.”
  • The challenge comes as the field of psychology is facing a generational change, with young researchers beginning to share their data and study designs before publication, to improve transparency. Still, the new critique is likely to feed an already lively debate about how best to conduct and evaluate so-called replication projects of studies. Such projects are underway in several fields, scientists on both sides of the debate said.
  • “On some level, I suppose it is appealing to think everything is fine and there is no reason to change the status quo,” said Sanjay Srivastava, a psychologist at the University of Oregon, who was not a member of either team. “But we know too much, from many other sources, to put too much credence in an analysis that supports that remarkable conclusion.”
  • One issue the critique raised was how faithfully the replication team had adhered to the original design of the 100 studies it retested. Small alterations in design can make the difference between whether a study replicates or not, scientists say.
  • Another issue that the critique raised had to do with statistical methods. When Dr. Nosek began his study, there was no agreed-upon protocol for crunching the numbers. He and his team settled on five measures
  • He said that the original replication paper and the critique use statistical approaches that are “predictably imperfect” for this kind of analysis. One way to think about the dispute, Dr. Simonsohn said, is that the original paper found that the glass was about 40 percent full, and the critique argues that it could be 100 percent full. In fact, he said in an email, “State-of-the-art techniques designed to evaluate replications say it is 40 percent full, 30 percent empty, and the remaining 30 percent could be full or empty, we can’t tell till we get more data.”
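One way to see how the two teams can read the same 100 retests so differently: a replication “success” is typically defined as a statistically significant result, so the expected success rate depends on the statistical power of the replications as well as on how many original effects were real. The Python sketch below is back-of-the-envelope arithmetic on invented numbers; it is not the methodology of the Reproducibility Project or of its critics.

# Illustrative arithmetic only; not either team's actual analysis.
# Expected share of "successful" replications (p < alpha), given an
# assumed statistical power and an assumed share of true effects.

def expected_replication_rate(power, true_share, alpha=0.05):
    # True effects replicate at roughly the rate of the replication's
    # power; null effects "replicate" only at the false-positive rate.
    return true_share * power + (1 - true_share) * alpha

# Even if every original finding were real, underpowered replications
# would often "fail" (all numbers here are assumptions):
print(expected_replication_rate(power=0.5, true_share=1.0))  # 0.5
print(expected_replication_rate(power=0.9, true_share=0.4))  # ~0.39

On these toy numbers, an observed replication rate near 40 percent is consistent both with most effects being false and with most effects being real but retested at modest power, which is roughly the shape of the disagreement described above.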
kushnerha

Learning a New Sport May Be Good for the Brain - The New York Times - 0 views

  • Learning in midlife to juggle, swim, ride a bicycle or, in my case, snowboard could change and strengthen the brain in ways that practicing other familiar pursuits such as crossword puzzles or marathon training will not, according to an accumulating body of research about the unique impacts of motor learning on the brain.
  • Such complex thinking generally is classified as “higher-order” cognition; it produces activity within certain portions of the brain and promotes plasticity, or physical changes, in those areas. There is strong evidence that learning a second language as an adult, for instance, results in increased white matter in the parts of the brain known to be involved in language processing.
  • Regular exercise likewise changes the brain, as I frequently have written, with studies in animals showing that running and other types of physical activities increase the number of new brain cells created in parts of the brain that are integral to memory and thinking.
  • ...5 more annotations...
  • But the impacts of learning on one of the most primal portions of the brain have been surprisingly underappreciated, both scientifically and outside the lab. Most of us pay little attention to our motor cortex, which controls how well we can move.
  • We like watching athletes in action, he said. But most of us make little effort to hone our motor skills in adulthood, and very few of us try to expand them by, for instance, learning a new sport. We could be short-changing our brains. Past neurological studies in people have shown that learning a new physical skill in adulthood, such as juggling, leads to increases in the volume of gray matter in parts of the brain related to movement control.
  • Even more compelling, a 2014 study with mice found that when the mice were introduced to a complicated type of running wheel, in which the rungs were irregularly spaced so that the animals had to learn a new, stutter-step type of running, their brains changed significantly. Learning to use these new wheels led to increased myelination of neurons in the animals’ motor cortexes. Myelination is the process by which parts of a brain cell are insulated, so that the messages between neurons can proceed more quickly and smoothly.
  • Scientists once believed that myelination in the brain occurs almost exclusively during infancy and childhood and then slows or halts altogether. But the animals running on the oddball wheels showed notable increases in the myelination of the neurons in their motor cortex even though they were adults.
  • In other words, learning the new skill had changed the inner workings of the adult animals’ motor cortexes; practicing a well-mastered one had not. “We don’t know” whether comparable changes occur within the brains of grown people who take up a new sport or physical skill, Dr. Krakauer said. But it seems likely, he said. “Motor skills are as cognitively challenging” in their way as traditional brainteasers such as crossword puzzles or brain-training games, he said. So adding a new sport to your repertoire should have salutary effects on your brain, and also, unlike computer-based games, provide all the physical benefits of exercise.
kushnerha

The Data Against Kant - The New York Times - 0 views

  • The history of moral philosophy is a history of disagreement, but on one point there has been virtual unanimity: It would be absurd to suggest that we should do what we couldn’t possibly do.
  • This principle — that “ought” implies “can,” that our moral obligations can’t exceed our abilities — played a central role in the work of Immanuel Kant and has been widely accepted since.
  • His thought experiments go something like this: Suppose that you and a friend are both up for the same job in another city. She interviewed last weekend, and your flight for the interview is this evening. Your car is in the shop, though, so your friend promises to drive you to the airport. But on the way, her car breaks down — the gas tank is leaking — so you miss your flight and don’t get the job. Would it make any sense to tell your friend, stranded at the side of the road, that she ought to drive you to the airport? The answer seems to be an obvious no (after all, she can’t drive you), and most philosophers treat this as all the confirmation they need for the principle. Suppose, however, that the situation is slightly different. What if your friend intentionally punctures her own gas tank to make sure that you miss the flight and she gets the job? In this case, it makes perfect sense to insist that your friend still has an obligation to drive you to the airport. In other words, we might indeed say that someone ought to do what she can’t — if we’re blaming her.
  • ...5 more annotations...
  • In our study, we presented hundreds of participants with stories like the one above and asked them questions about obligation, ability and blame. Did they think someone should keep a promise she made but couldn’t keep? Was she even capable of keeping her promise? And how much was she to blame for what happened?
  • We found a consistent pattern, but not what most philosophers would expect. “Ought” judgments depended largely on concerns about blame, not ability. With stories like the one above, in which a friend intentionally sabotages you, 60 percent of our participants said that the obligation still held — your friend still ought to drive you to the airport. But with stories in which the inability to help was accidental, the obligation all but disappeared. Now, only 31 percent of our participants said your friend still ought to drive you.
  • Professor Sinnott-Armstrong’s unorthodox intuition turns out to be shared by hundreds of nonphilosophers. So who is right? The vast majority of philosophers, or our participants? One possibility is that our participants were wrong, perhaps because their urge to blame impaired the accuracy of their moral judgments. To test this possibility, we stacked the deck in favor of philosophical orthodoxy: We had the participants look at cases in which the urge to assign blame would be lowest — that is, only the cases in which the car accidentally broke down. Even still, we found no relationship between “ought” and “can.” The only significant relationship was between “ought” and “blame.”
  • This finding has an important implication: Even when we say that someone has no obligation to keep a promise (as with your friend whose car accidentally breaks down), it seems we’re saying it not because she’s unable to do it, but because we don’t want to unfairly blame her for not keeping it. Again, concerns about blame, not about ability, dictate how we understand obligation.
  • While this one study alone doesn’t refute Kant, our research joins a recent salvo of experimental work targeting the principle that “ought” implies “can.” At the very least, philosophers can no longer treat this principle as obviously true.
kushnerha

BBC - Future - Will emoji become a new language? - 2 views

  • Emoji are now used in around half of all sentences on sites like Instagram, and Facebook looks set to introduce them alongside the famous “like” button as a way of expressing your reaction to a post.
  • If you were to believe the headlines, this is just the tipping point: some outlets have claimed that emoji are an emerging language that could soon compete with English in global usage. To many, this would be an exciting evolution of the way we communicate; to others, it is linguistic Armageddon.
  • Do emoji show the same characteristics as other communicative systems and actual languages? And what do they help us to express that words alone can’t say? When emoji appear with text, they often supplement or enhance the writing. This is similar to gestures that appear along with speech. Over the past three decades, research has shown that our hands provide important information that often transcends and clarifies the message in speech. Emoji serve this function too – for instance, adding a kissy or winking face can disambiguate whether a statement is flirtatiously teasing or just plain mean.
  • ...17 more annotations...
  • This is a key point about language use: rarely is natural language ever limited to speech alone. When we are speaking, we constantly use gestures to illustrate what we mean. For this reason, linguists say that language is “multi-modal”. Writing takes away that extra non-verbal information, but emoji may allow us to re-incorporate it into our text.
  • Emoji are not always used as embellishments, however – sometimes, strings of the characters can themselves convey meaning in a longer sequence on their own. But to constitute their own language, they would need a key component: grammar.
  • A grammatical system is a set of constraints that governs how the meaning of an utterance is packaged in a coherent way. Natural language grammars have certain traits that distinguish them. For one, they have individual units that play different roles in the sequence – like nouns and verbs in a sentence. Also, grammar is different from meaning
  • When emoji are isolated, they are primarily governed by simple rules related to meaning alone, without these more complex rules. For instance, according to research by Tyler Schnoebelen, people often create strings of emoji that share a common meaning
  • This sequence has little internal structure; even when it is rearranged, it still conveys the same message. These images are connected solely by their broader meaning. We might consider them to be a visual list: “here are all things related to celebrations and birthdays.” Lists are certainly a conventionalised way of communicating, but they don’t have grammar the way that sentences do.
  • What if the order did matter though? What if they conveyed a temporal sequence of events? Consider an emoji string that means something like “a woman had a party where they drank, and then opened presents and then had cake”.
  • In all cases, the doer of the action (the agent) precedes the action. In fact, this pattern is commonly found in both full languages and simple communication systems. For example, the majority of the world’s languages place the subject before the verb of a sentence.
  • These rules may seem like the seeds of grammar, but psycholinguist Susan Goldin-Meadow and colleagues have found this order appears in many other systems that would not be considered a language. For example, this order appears when people arrange pictures to describe events from an animated cartoon, or when speaking adults communicate using only gestures. It also appears in the gesture systems created by deaf children who cannot hear spoken languages and are not exposed to sign languages.
  • describes the children as lacking exposure to a language and thus inventing their own manual systems to communicate, called “homesigns”. These systems are limited in the size of their vocabularies and the types of sequences they can create. For this reason, the agent-act order seems to arise not from a grammar but from basic heuristics – practical workarounds – based on meaning alone. Emoji seem to tap into this same system.
  • Nevertheless, some may argue that despite emoji’s current simplicity, this may be the groundwork for emerging complexity – that although emoji do not constitute a language at the present time, they could develop into one over time.
  • Could an emerging “emoji visual language” be developing in a similar way, with actual grammatical structure? To answer that question, you need to consider the intrinsic constraints on the technology itself. Emoji are created by typing into a computer like text. But, unlike text, most emoji are provided as whole units, except for the limited set of emoticons which convert to emoji, like :) or ;). When writing text, we create the units (words) out of building blocks (letters), rather than searching through a list of every whole word in the language.
  • emoji force us to convey information in a linear unit-by-unit string, which limits how complex expressions can be made. These constraints may mean that they will never be able to achieve even the most basic complexity that we can create with normal and natural drawings.
  • What’s more, these limits also prevent users from creating novel signs – a requisite for all languages, especially emerging ones. Users have no control over the development of the vocabulary. As the “vocab list” for emoji grows, it will become increasingly unwieldy: using them will require a conscious search process through an external list, not an easy generation from our own mental vocabulary, like the way we naturally speak or draw. This is a key point – it means that emoji lack the flexibility needed to create a new language.
  • we already have very robust visual languages, as can be seen in comics and graphic novels. As I argue in my book, The Visual Language of Comics, the drawings found in comics use a systematic visual vocabulary (such as stink lines to represent smell, or stars to represent dizziness). Importantly, the available vocabulary is not constrained by technology and has developed naturally over time, like spoken and written languages.
  • grammar of sequential images is more of a narrative structure – not of nouns and verbs. Yet, these sequences use principles of combination like any other grammar, including roles played by images, groupings of images, and hierarchic embedding.
  • measured participants’ brainwaves while they viewed sequences one image at a time where a disruption appeared either within the groupings of panels or at the natural break between groupings. The particular brainwave responses that we observed were similar to those that experimenters find when violating the syntax of sentences. That is, the brain responds the same way to violations of “grammar”, whether in sentences or sequential narrative images.
  • I would hypothesise that emoji can use a basic narrative structure to organise short stories (likely made up of agent-action sequences), but I highly doubt that they would be able to create embedded clauses like these. I would also doubt that you would see the same kinds of brain responses that we saw with the comic strip sequences.