
Home/ TOK Friends/ Group items matching "requirement" in title, tags, annotations or url

Javier E

Do Your Friends Actually Like You? - The New York Times - 1 views

  • Recent research indicates that only about half of perceived friendships are mutual. That is, someone you think is your friend might not be so keen on you. Or, vice versa, as when someone you feel you hardly know claims you as a bestie.
  • “The notion of doing nothing but spending time in each other’s company has, in a way, become a lost art,” replaced by volleys of texts and tweets, Mr. Sharp said. “People are so eager to maximize efficiency of relationships that they have lost touch with what it is to be a friend.”
  • It’s a concern because the authenticity of one’s relationships has an enormous impact on one’s health and well-being.
  • The study analyzed friendship ties among 84 subjects (ages 23 to 38) in a business management class by asking them to rank one another on a five-point continuum of closeness from “I don’t know this person” to “One of my best friends.” The feelings were mutual 53 percent of the time while the expectation of reciprocity was pegged at 94 percent. This is consistent with data from several other friendship studies conducted over the past decade, encompassing more than 92,000 subjects, in which the reciprocity rates ranged from 34 percent to 53 percent.
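The reciprocity calculation described in the study above can be sketched concretely: each directed closeness rating either crosses a "friendship" threshold or not, and a claimed tie is mutual when both directions cross it. The names, ratings, and threshold below are invented for illustration; the study's actual data and cutoff are not reproduced here.

```python
# Sketch of the reciprocity measure described above.
# Each person rates every other on a 0-4 closeness scale
# (0 = "I don't know this person", 4 = "One of my best friends").
# A directed "friend" tie exists when the rating crosses a threshold;
# the tie is mutual when both directions cross it.
# All names, ratings, and the threshold are illustrative assumptions.

ratings = {
    ("ana", "ben"): 4, ("ben", "ana"): 1,   # one-sided claim
    ("ana", "cem"): 3, ("cem", "ana"): 3,   # mutual
    ("ben", "cem"): 2, ("cem", "ben"): 0,   # no tie either way
}

THRESHOLD = 3  # minimum rating that counts as claiming a friendship

claimed = {(a, b) for (a, b), r in ratings.items() if r >= THRESHOLD}
mutual = {(a, b) for (a, b) in claimed if (b, a) in claimed}

reciprocity = len(mutual) / len(claimed)
print(f"{len(claimed)} claimed ties, reciprocity = {reciprocity:.0%}")
```

On this toy data, one of ana's two claimed friendships is returned, so reciprocity lands at 67 percent, in the same ballpark as the 34-53 percent range the studies report.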
  • “Friendship is difficult to describe,” said Alexander Nehamas, a professor of philosophy at Princeton, who in his latest book, “On Friendship,” spends almost 300 pages trying to do just that. “It’s easier to say what friendship is not and, foremost, it is not instrumental.”
  • It is not a means to obtain higher status, wangle an invitation to someone’s vacation home or simply escape your own boredom. Rather, Mr. Nehamas said, friendship is more like beauty or art, which kindles something deep within us and is “appreciated for its own sake.”
  • “Treating friends like investments or commodities is anathema to the whole idea of friendship,” said Ronald Sharp, a professor of English at Vassar College, who teaches a course on the literature of friendship. “It’s not about what someone can do for you, it’s who and what the two of you become in each other’s presence.”
  • Some blame human beings’ basic optimism, if not egocentrism, for the disconnect between perceived and actual friendships. Others point to a misunderstanding of the very notion of friendship in an age when “friend” is used as a verb, and social inclusion and exclusion are as easy as a swipe or a tap on a smartphone screen.
  • By his definition, friends are people you take the time to understand and allow to understand you.
  • Because time is limited, so, too, is the number of friends you can have, according to the work of the British evolutionary psychologist Robin I.M. Dunbar. He describes layers of friendship, where the topmost layer consists of only one or two people, say a spouse and best friend with whom you are most intimate and interact daily. The next layer can accommodate at most four people for whom you have great affinity, affection and concern and who require weekly attention to maintain. Out from there, the tiers contain more casual friends with whom you invest less time and tend to have a less profound and more tenuous connection. Without consistent contact, they easily fall into the realm of acquaintance. You may be friendly with them but they aren’t friends.
  • “There is a limited amount of time and emotional capital we can distribute, so we only have five slots for the most intense type of relationship,” Mr. Dunbar said. “People may say they have more than five, but you can be pretty sure they are not high-quality friendships.”
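Dunbar's tiered model, as described above, can be sketched as capacity-limited layers filled from the inside out. The outer-tier capacity and the assign() helper below are illustrative assumptions, not Dunbar's published figures.

```python
# Minimal sketch of the layered-friendship model described above.
# Inner capacities follow the annotation (one or two intimates, then
# up to four more close friends); the "casual" tier size and the
# assign() helper are illustrative assumptions.

DUNBAR_LAYERS = [
    ("intimates", 2),        # spouse / best friend, daily contact
    ("close friends", 4),    # weekly attention required
    ("casual friends", 9),   # less time invested, more tenuous
]

def assign(friends):
    """Fill tiers in order until each capacity is exhausted."""
    tiers, it = {}, iter(friends)
    for name, cap in DUNBAR_LAYERS:
        tiers[name] = [f for _, f in zip(range(cap), it)]
    tiers["acquaintances"] = list(it)  # everyone past the last tier
    return tiers

tiers = assign([f"person{i}" for i in range(20)])
print({name: len(members) for name, members in tiers.items()})
```

The point the sketch makes is structural: with fixed capacities, adding a twentieth contact cannot create a sixth intimate; it can only lengthen the acquaintance list.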
  • Such boasting implies they have soul mates to spare in a culture where we are taught that leaning on someone is a sign of weakness and power is not letting others affect you. But friendship requires the vulnerability of caring as well as revealing things about yourself that don’t match the polished image in your Facebook profile or Instagram feed, said Mr. Nehamas at Princeton. Trusting that your bond will continue, and might even be strengthened, despite your shortcomings and inevitable misfortunes, he said, is a risk many aren’t willing to take.
  • According to medical experts, playing it safe by engaging in shallow, unfulfilling or nonreciprocal relationships has physical repercussions. Not only do the resulting feelings of loneliness and isolation increase the risk of death as much as smoking, alcoholism and obesity; you may also lose tone, or function, in the so-called smart vagus nerve, which brain researchers think allows us to be in intimate, supportive and reciprocal relationships in the first place.
  • In the presence of a true friend, Dr. Banks said, the smart or modulating aspect of the vagus nerve is what makes us feel at ease rather than on guard as when we are with a stranger or someone judgmental. It’s what enables us to feel O.K. about exposing the soft underbelly of our psyche and helps us stay engaged and present in times of conflict. Lacking authentic friendships, the smart vagus nerve is not exercised. It loses tone and one’s anxiety remains high, making abiding, deep connections difficult.
Javier E

Why Our Children Don't Think There Are Moral Facts - NYTimes.com - 1 views

  • I already knew that many college-aged students don’t believe in moral facts.
  • the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.
  • where is the view coming from?
  • the Common Core standards used by a majority of K-12 programs in the country require that students be able to “distinguish among fact, opinion, and reasoned judgment in a text.”
  • So what’s wrong with this distinction and how does it undermine the view that there are objective moral facts?
  • For example, many people once thought that the earth was flat. It’s a mistake to confuse truth (a feature of the world) with proof (a feature of our mental lives)
  • Furthermore, if proof is required for facts, then facts become person-relative. Something might be a fact for me if I can prove it but not a fact for you if you can’t. In that case, E=MC2 is a fact for a physicist but not for me.
  • worse, students are taught that claims are either facts or opinions. They are given quizzes in which they must sort claims into one camp or the other but not both. But if a fact is something that is true and an opinion is something that is believed, then many claims will obviously be both
  • How does the dichotomy between fact and opinion relate to morality?
  • Kids are asked to sort facts from opinions and, without fail, every value claim is labeled as an opinion.
  • Here’s a little test devised from questions available on fact vs. opinion worksheets online: are the following facts or opinions?
    — Copying homework assignments is wrong.
    — Cursing in school is inappropriate behavior.
    — All men are created equal.
    — It is worth sacrificing some personal liberties to protect our country from terrorism.
    — It is wrong for people under the age of 21 to drink alcohol.
    — Vegetarians are healthier than people who eat meat.
    — Drug dealers belong in prison.
  • Our children deserve a consistent intellectual foundation. Facts are things that are true. Opinions are things we believe. Some of our beliefs are true. Others are not. Some of our beliefs are backed by evidence. Others are not.
  • In summary, our public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths.
  • It should not be a surprise that there is rampant cheating on college campuses: If we’ve taught our students for 12 years that there is no fact of the matter as to whether cheating is wrong, we can’t very well blame them for doing so later on.
  • If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?
  • the curriculum sets our children up for doublethink. They are told that there are no moral facts in one breath even as the next tells them how they ought to behave.
  • The answer? In each case, the worksheets categorize these claims as opinions. The explanation on offer is that each of these claims is a value claim and value claims are not facts. This is repeated ad nauseam: any claim with good, right, wrong, etc. is not a fact.
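The worksheet rule described here (any claim containing a value word gets labeled an opinion) amounts to a keyword check. A deliberately crude sketch of that rule, with an invented word list, shows how mechanically it sorts the test sentences above:

```python
# Crude sketch of the worksheet rule criticized above: label any
# claim containing a value word as "opinion", everything else "fact".
# The word list is an illustrative assumption, not the worksheets' own.

VALUE_WORDS = {"good", "bad", "right", "wrong", "should", "worth",
               "inappropriate", "belong", "equal"}

def worksheet_label(claim):
    # Normalize each token, then test for overlap with the value words.
    words = {w.strip(".,?").lower() for w in claim.split()}
    return "opinion" if words & VALUE_WORDS else "fact"

print(worksheet_label("Copying homework assignments is wrong."))  # opinion
print(worksheet_label("The earth orbits the sun."))               # fact
print(worksheet_label("All men are created equal."))              # opinion
```

The sketch makes the article's complaint visible: the rule classifies by vocabulary, not by whether the claim is true, so "All men are created equal" lands in the opinion bin automatically.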
  • Professor McBrayer seems to miss the major point of the Common Core concern: can students distinguish between premises based on (reasonably construed) fact and premises based on emotion when evaluating conclusions? I would prefer that students learn to reason rather than be taught moral 'truth' that follows Professor McBrayer's logic.
  • The hard work lies not in recognizing that at least some moral claims are true but in carefully thinking through our evidence for which of the many competing moral claims is correct.
  • Moral truths are not the same as scientific truths or mathematical truths. Yet they may still be used as a guiding principle for our individual lives as well as our laws. But there is equal danger in giving moral judgments the designation of truth as there is in not doing so. Many people believe that abortion is murder on the same level as shooting someone with a gun. But many others do not. So is it true that abortion is murder? Moral principles can become generally accepted and then form the basis for our laws. But many long-accepted moral principles were later rejected as being faulty. "Separate but equal" is an example. Judging homosexual relationships as immoral is another example.
  • Whoa! That Einstein derived an equation is a fact. But the equation represents a theory that may have to be tweaked at some point in the future. It may be a fact that the equation foretold the violence of atomic explosions, but there are aspects of nature that elude the equation. Remember "the theory of everything?"
  • Here is a moral fact: this is a sermon masquerading as a philosophical debate on facts, opinions and truth. This professor of religion is asserting that the government, via Common Core, is teaching atheism via the opinion-versus-fact distinction. He is arguing, in a dishonest form, that public schools should be teaching moral facts. Of course "moral facts" is code for the Ten Commandments.
  • As a fourth grade teacher, I try to teach students to read critically, including distinguishing between facts and opinions as they read (and have been doing this long before the Common Core arrived, by the way). It's not always easy for children to grasp the difference. I can only imagine the confusion that would ensue if I introduced a third category -- moral "facts" that can't be proven but are true nonetheless!
  • horrible acts occur not because of moral uncertainty, but because people are too sure that their views on morality are 100% true, and that anyone who fails to recognize and submit to them is a heathen who deserves death. I can't think of any case where a society has suffered because people are too thoughtful and open-minded to different perspectives on moral truth. In any case, it's not an elementary school's job to teach "moral truths."
  • The characterization of moral anti-realism as some sort of fringe view in philosophy is misleading. Claims that can be true or false are, it seems, 'made true' by features of the world. It's not clear to many in philosophy (like me) just what features of the world could make our moral claims true. We are more likely to see people's value claims as making claims about, and enforcing conformity to, our own (contingent) social norms. This is not to hold, as Mr. McBrayer seems to think follows, that there are no reasons to endorse or criticize these social norms.
  • This is nonsense. Giving kids the tools to distinguish between fact and opinion is hard enough in an age when Republicans actively deny reality on Fox News every night. The last thing we need is to muddy their thinking with the concept of "moral facts." A fact is a belief that everyone _should_ agree upon because it is observable and testable. Morals are not agreed upon by all. Consider the hot-button issue of abortion.
  • Truthfully, I'm not terribly concerned that third graders will end up taking these lessons in the definition of fact versus opinion to the extremes considered here, or take them as a license to cheat. That will come much later, when they figure out, as people always have, what they can get away with. But Prof. McBrayer, with his blithe expectation that all the grownups know that there are moral "facts"? He scares the heck out of me.
  • I've long chafed at the language of "fact" v. "opinion", which is grounded in a very particular, limited view of human cognition. In my own ethics courses, I work actively to undermine the distinction, focusing instead on considered judgment . . . or even more narrowly, on consideration itself. (See http://wp.me/p5Ag0i-6M )
  • The real waffle here is the very concept of "moral facts." Our statements of values, even very important ones are, obviously, not facts. Trying to dress them up as if they are facts, to me, argues for a pretty serious moral weakness on the part of those advancing the idea.
  • Our core values are not important because they are facts. They are important because we collectively hold them and cherish them. To lean on the false crutch of "moral facts" is to admit the weakness of your own moral convictions.
  • I would like to believe that there is a core of moral facts/values upon which all humanity can agree, but it would be tough to identify exactly what those are.
  • For the ancient philosophers, reality comprised the Good, the True, and the Beautiful (what we might now call ethics, science and art), seeing these as complementary and inseparable, though distinct, realms. With the ascendancy of science in our culture as the only valid measure of reality to the detriment of ethics and art (that is, if it is not observable and provable, it is not real), we have turned the good and the beautiful into mere "social constructs" that have no validity on their own. While I am sympathetic in many ways with Dr. McBrayer's objections, I think he falls into the trap of discounting the Good and the Beautiful as valid in and of themselves, and tries, instead, to find ways to give them validity through the True. I think his argument would have been stronger had he used the language of validity rather than the language of truth. Goodness, Truth and Beauty each have their own validity, though interdependent and inseparable. When we artificially extract one of these and give it primacy, we distort reality and alienate ourselves from it.
  • Value claims are like any other claims: either true or false, evidenced or not.
  • Moral issues cannot scientifically be treated on the level that Prof. McBrayer is attempting to use in this column: true or false, fact or opinion or both. Instead, they should be treated as important characteristics of the systematic working of a society or of a group of people in general. One can compare the working of two groups of people: one in which e.g. cheating and lying is acceptable, and one in which they are not. One can use historical or model examples to show the consequences and the working of specific systems of morals. I think that this method - suitably adjusted - can be used even in second grade.
  • Relativism has nothing to do with liberalism. The second point is that I'm not sure it does all that much harm, because I have yet to encounter a student who thought that he or she had to withhold judgment on those who hold opposing political views!
Javier E

Googling Is Believing: Trumping the Informed Citizen - The New York Times - 1 views

  • Rubio’s Google gambit and Trump’s (non)reaction to it reveal an interesting, and troubling, new change in attitude about a philosophical foundation of democracy: the ideal of an informed citizenry.
  • The idea is obvious: If citizens are going to make even indirect decisions about policy, we need to know the facts about the problem the policy is meant to rectify, and to be able to gain some understanding about how effective that policy would be.
  • Noam Chomsky argued in the 1980s that consent was being “manufactured” by Big Media — large consolidated content-delivery companies (like this newspaper) that could cause opinions to sway one way or the other at their whim.
  • searching the Internet can get you to information that would back up almost any claim of fact, no matter how unfounded. It is both the world’s best fact-checker and the world’s best bias confirmer — often at the same time.
  • Nor is it a coincidence that people are increasingly following the election on social media, using it both as the source of their information and as the way to get their view out. Consent is still being manufactured, but the manufacturing is being done willingly by us, usually intended for consumption by other people with whom we already agree, facts or no facts.
  • It really isn’t a surprise that Rubio would ask us to Google for certain facts; that’s how you and I know almost everything we know nowadays — it is a way of knowing that is so embedded into the very fabric of our lives that we don’t even notice it
  • The problem of course is that having more information available, even more accurate information, isn’t what is required by the ideal.
  • What is required is that people actually know and understand that information, and there are reasons to think we are no closer to an informed citizenry understood in that way than we ever have been. Indeed, we might be further away.
  • The worry is no longer about who controls content. It is about who controls the flow of that content.
  • the flow of digital information is just as prone to manipulation as its content
  • No wonder Trump and his followers on Twitter immediately shrugged off Rubio’s inconvenient truths; there is nothing to fear from information when counterinformation is just as plentiful.
  • The real worry concerns our faith in the ideal of an informed citizenry itself. That worry, as I see it, has two faces.
  • First, as Jason Stanley and others have emphasized recently, appeals to ideals can be used to undermine those very ideals.
  • The very availability of information can make us think that the ideal of the informed citizen is more realized than it is — and that, in turn, can actually undermine the ideal, making us less informed, simply because we think we know all we need to know already.
  • Second, the danger is that increasing recognition of the fact that Googling can get you wherever you want to go can make us deeply cynical about the ideal of an informed citizenry — for the simple reason that what counts as an “informed” citizen is a matter of dispute. We no longer disagree just over values. Nor do we disagree just over the facts. We disagree over whose source — whose fountain of facts — is the right one.
  • And once disagreement reaches that far down, the daylight of reason seems very far away indeed.
kushnerha

Is That Even a Thing? - The New York Times - 3 views

  • Speakers and writers of American English have recently taken to identifying a staggering and constantly changing array of trends, events, memes, products, lifestyle choices and phenomena of nearly every kind with a single label — a thing.
  • It would be easy to call this a curiosity of the language and leave it at that. Linguistic trends come and go.
  • One could, on the other hand, consider the use of “a thing” a symptom of an entire generation’s linguistic sloth, general inarticulateness and penchant for cutesy, empty, half-ironic formulations that create a self-satisfied barrier preventing any form of genuine engagement with the world around them.
  • My assumption is that language and experience mutually influence each other. Language not only captures experience, it conditions it. It sets expectations for experience and gives shape to it as it happens. What might register as inarticulateness can reflect a different way of understanding and experiencing the world.
  • The word “thing” has of course long played a versatile and generic role in our language, referring both to physical objects and abstract matters. “The thing is …” “Here’s the thing.” “The play’s the thing.” In these examples, “thing” denotes the matter at hand and functions as stage setting to emphasize an important point. One new thing about “a thing,” then, is the typical use of the indefinite article “a” to precede it. We talk about a thing because we are engaged in cataloging. The question is whether something counts as a thing. “A thing” is not just stage setting. Information is conveyed.
  • What information? One definition of “a thing” that suggests itself right away is “cultural phenomenon.” A new app, an item of celebrity gossip, the practices of a subculture. It seems likely that “a thing” comes from the phrase the coolest/newest/latest thing. But now, in a society where everything, even the past, is new — “new thing” verges on the redundant. If they weren’t new they wouldn’t be things.
  • Clearly, cultural phenomena have long existed and been called “fads,” “trends,” “rages” or have been designated by the category they belong to — “product,” “fashion,” “lifestyle,” etc. So why the application of this homogenizing general term to all of them? I think there are four main reasons.
  • First, the flood of content into the cultural sphere. That we are inundated is well known. Information besieges us in waves that thrash us against the shore until we retreat to the solid ground of work or sleep or exercise or actual human interaction, only to wade cautiously back into our smartphones. As we spend more and more time online, it becomes the content of our experience, and in this sense “things” have earned their name. “A thing” has become the basic unit of cultural ontology.
  • Second, the fragmentation of this sphere. The daily barrage of culture requires that we choose a sliver of the whole in order to keep up. Netflix genres like “Understated Romantic Road Trip Movies” make it clear that the individual is becoming his or her own niche market — the converse of the celebrity as brand. We are increasingly a society of brands attuning themselves to markets, and markets evaluating brands. The specificity of the market requires a wider range of content — of things — to satisfy it
  • Third, the closing gap between satire and the real thing. The absurd excess of things has reached a point where the ironic detachment needed to cope with them is increasingly built into the things themselves, their marketing and the language we use to talk about them. The designator “a thing” is thus almost always tinged with ironic detachment. It puts the thing at arm’s length. You can hardly say “a thing” without a wary glint in your eye.
  • Finally, the growing sense that these phenomena are all the same. As we step back from “things,” they recede into the distance and begin to blur together. We call them all by the same name because they are the same at bottom: All are pieces of the Internet. A thing is for the most part experienced through this medium and generated by it. Even if they arise outside it, things owe their existence as things to the Internet. Google is thus always the arbiter of the question, “Is that a real thing?”
  • “A thing,” then, corresponds to a real need we have, to catalog and group together the items of cultural experience, while keeping them at a sufficient distance so that we can at least feign unified consciousness in the face of a world gone to pieces.
Javier E

Who Needs Math? - The Monkey Cage - 1 views

  • by Larry Bartels on April 9, 2013
  • “When something new is encountered, the follow-up steps usually require mathematical and statistical methods to move the analysis forward.” At that point, he suggests finding a collaborator
  • But technical expertise in itself is of little avail: “The annals of theoretical biology are clogged with mathematical models that either can be safely ignored or, when tested, fail. Possibly no more than 10% have any lasting value. Only those linked solidly to knowledge of real living systems have much chance of being used.”
  • If you’re going to talk about economics at all, you need some sense of how magnitudes play off against each other, which is the only way to have a chance of seeing how the pieces fit together.
  • [M]aybe the thing to say is that higher math isn’t usually essential; arithmetic is.
  • My own work has become rather less mathematical over the course of my career. When people ask why, I usually say that as I have come to learn more about politics, the “sophisticated” wrinkles have seemed to distract more than they added.
  • “Seeing how the pieces fit together” requires “some sense of how magnitudes play off against each other.” But, paradoxically, “higher math” can get in the way of “mathematical intuition” about magnitudes. Formal theory is often couched in purely qualitative terms: under such and such conditions, more X should produce more Y. And quantitative analysis—which ought to focus squarely on magnitudes—is less likely to do so the more it is justified and valued on technical rather than substantive grounds.
  • I recently spent some time doing an informal meta-analysis of studies of the impact of campaign advertising. At the heart of that literature is a pretty simple question: how much does one more ad contribute to the sponsoring candidate’s vote share? Alas, most of the studies I reviewed provided no intelligible answer to that question; and the correlation between methodological “sophistication” (logarithmic transformations, multinomial logits, fixed effects, distributed lag models) and intelligibility was decidedly negative. The authors of these studies rarely seemed to know or care what their results implied about the magnitude of the effect, as long as those results could be billed as “statistically significant.”
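The magnitude-versus-significance complaint can be made concrete: a logit coefficient, however "significant," says nothing directly about vote share until it is converted into a marginal effect. A sketch with entirely invented numbers:

```python
import math

# Hypothetical illustration of the magnitude question above: what does
# one more ad imply for vote share? All numbers below are invented.
# In a logistic model, P(vote for sponsor) = 1 / (1 + exp(-(b0 + b1*ads))).

b0, b1 = -0.10, 0.002     # invented intercept and per-ad coefficient
ads_baseline = 100        # invented baseline number of ads aired

def p(ads):
    """Predicted probability of voting for the sponsoring candidate."""
    return 1 / (1 + math.exp(-(b0 + b1 * ads)))

# Marginal effect of one additional ad at the baseline, in vote share:
effect = p(ads_baseline + 1) - p(ads_baseline)
print(f"one more ad moves vote share by about {effect:+.4%}")
```

With these invented inputs the coefficient could easily clear a significance test while the substantive answer is a shift of a few hundredths of a percentage point, which is exactly the distinction the annotation says the studies fail to report.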
Javier E

China: A Modern Babel - WSJ - 0 views

  • The oft-repeated claim that we must all learn Mandarin Chinese, the better to trade with our future masters, is one that readers of David Moser’s “A Billion Voices” will rapidly end up re-evaluating.
  • In fact, many Chinese don’t speak it: Even Chinese authorities quietly admit that only about 70% of the population speaks Mandarin, and merely one in 10 of those speak it fluently.
  • Mr. Moser presents a history of what is more properly called Putonghua, or “common speech,” along with a clear, concise and often amusing introduction to the limits of its spoken and written forms.
  • what Chinese schoolchildren are encouraged to think of as the longstanding natural speech of the common people is in fact an artificial hybrid, only a few decades old, although it shares a name—Mandarin—with the language of administration from imperial times. It’s a designed-by-committee camel of a language that has largely lost track of its past.
  • The idea of a national Chinese language began with the realization by the accidentally successful revolutionaries of 1911 that retaining control over a country speaking multiple languages and myriad dialects would necessitate reform. Long-term unification and the introduction of mass education would require a common language.
  • Whatever the province they originated from, the administrators of the now-toppled Great Qing Empire had all learned to communicate with one another in a second common language—Guanhua, China’s equivalent, in practical terms, of medieval Latin
  • To understand this highly compressed idiom required a considerable knowledge of the Chinese classics. Early Jesuit missionaries had labeled it Mandarin,
  • The committee decided that the four-tone dialect of the capital would be the base for a new national language but added a fifth tone whose use had lapsed in the north but not in southern dialects. The result was a language that no one actually spoke.
  • After the Communist victory of 1949, the process began all over again with fresh conferences, leading finally to the decision to use Beijing sounds, northern dialects and modern literature in the vernacular (of which there was very little) as a source of grammar.
  • This new spoken form is what is now loosely labeled Mandarin, still as alien to most Chinese as all the other Chinese languages.
  • A Latin alphabet system called Pinyin was introduced to help children learn to pronounce Chinese characters, but today it is usually abandoned after the first few years of elementary school.
  • The view that Mandarin is too difficult for mere foreigners to learn is essential to Chinese amour propre. But it is belied by the number of foreign high-school students who now learn the language by using Pinyin as a key to pronunciation — and who bask in the admiration they receive as a result.
  • Since 1949, the Chinese government, obsessed with promoting the image of a nation completely united in its love of the Communist Party, has decided that the Chinese people speak not several different languages but the same one in a variety of dialects. To say otherwise is to suggest, dangerously, that China is not one nation
  • Yet on Oct. 1, 1949, Mao Zedong announced the founding of the People’s Republic in a Hunan accent so thick that members of his audience subsequently differed about what he had said. He never mastered the Beijing sounds on which Putonghua is based, nor did Sichuanese-speaking Deng Xiaoping or most of his successors.
  • When Xi Jinping took power in 2012, many online commentators rejoiced. “At last! A Chinese leader who can speak Putonghua!” One leader down, only 400 million more common people to go.
sissij

Why Westerners and Easterners Really Do Think Differently | Big Think - 0 views

  • While the studies cover many different topics, the subject of an individualistic or holistic thinking style is noteworthy.
  • In one study, complex images were shown to test subjects from East Asia and North America. The scientists tracked the eye movements of the participants in order to gauge where their attention was focused. It was found that the Chinese participants spent more time looking at the background of the image, while the Americans tended to focus on the main object in the picture. Holistic and individualistic thinking manifested in one clear example.
  • Of course, these tendencies are generalizations.
  • One is that the staple food of a region may have something to do with it. This is excellently seen in China, where the northern half of the country grows wheat and the southern half grows rice. Rice growing is a labour-intensive activity that requires the coordination of several neighboring farms to do properly. Wheat farming, on the other hand, takes much less work and does not require coordination of irrigation systems in order to work.
  • Even today, more than 100 years after the colonization effort, the effects of living in a society that was so recently a frontier show up in individual and holistic thinking tests, with residents of Hokkaido demonstrating tendencies towards individualism to a larger extent than the rest of the Japanese population.
  •  
    I really like the author stating that "Of course, these tendencies are generalizations." This shows that the study is not meant to sort people into two groups, east and west, but the tendency is still worth noticing. The trials presented in the article suggest different possible explanations for the difference. I think this cultural difference is similar to why Australia has very distinctive animals compared to the other continents: since ancient times, westerners and easterners were isolated from each other, so they took different approaches to developing their civilizations. However, I really like the author emphasizing that this difference is not stereotyping; it is the result of population analysis and observation. --Sissi (2/6/2017)
dicindioha

What's at Stake in a Health Bill That Slashes the Safety Net - The New York Times - 0 views

  • It is startling to realize just how much the social safety net expanded during Barack Obama’s presidency. In 2016, means-tested entitlements like Medicaid and food stamps absorbed 3.8 percent of the nation’s gross domestic product, almost a full percentage point more than in 2008
  • Public social spending writ large — including health care, pensions, unemployment insurance, poverty alleviation and the like — reached 19.3 percent of G.D.P.
  • Government in the United States still spends less than most of its peers across the industrialized world to support the general welfare of its citizens.
  • ...11 more annotations...
  • Last week, President Trump’s sketch of a budget underscored how little interest he has in the nation’s social insurance programs — proposing to shift $54 billion next year to the military
  • Republicans in the House plan to vote this week to undo the Affordable Care Act. That law was Mr. Obama’s singular contribution toward an American welfare state, the biggest expansion of the nation’s safety net in half a century.
  • Welfare reform did hurt many poor people by converting antipoverty funds into block grants to the states. But it was accompanied by a big increase in the earned-income tax credit, the nation’s most effective antipoverty tool today.
  • “No other Congress or administration has ever put forward a plan with the intention of having fewer people covered.”
  • Under the House Republican plan, 24 million more Americans will lack health insurance by 2026, according to the nonpartisan Congressional Budget Office.
  • Millions of Americans — poor ones, mainly — will use much less health care. They will make fewer outpatient visits, have fewer mammograms and cholesterol checks.
  • In any event, public health insurance will take a big hit.
  • Who knows where this retrenchment takes the country? Maybe attaching a work requirement to Medicaid, as conservatives propose, will prod the poor to get a job. Or perhaps it will just cut more people from Medicaid’s rolls. Further up the income ladder, losing a job will become more costly when it means losing health insurance, too.
  • Might depression and mental health problems destabilize families, feeding down into the health, education and well-being of the next generation?
  • Yet it is worth remembering that among advanced nations, the United States is a laggard in life expectancy and has one of the highest infant mortality rates.
  • If American history provides any sort of guidance, it is that continuing to shred the social safety net will definitely make things worse.
  •  
    Directing spending away from the American people and their access to healthcare is a real possibility under Trump. It will be interesting to see the effect this has on the healthcare market and the American people. This article suggests it will probably hurt many poor people and worsen their health.
Javier E

Why It's OK to Let Apps Make You a Better Person - Evan Selinger - Technology - The Atlantic - 0 views

  • one theme emerges from the media coverage of people's relationships with our current set of technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification--nudging, the quantified self, and gamification--and good old-fashioned financial incentive manipulation, are tackling weakness of will. They're harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.
  • the trend still has multiple interesting dimensions
  • Individuals are turning ever more aspects of their lives into managerial problems that require technological solutions. We have access to an ever-increasing array of free and inexpensive technologies that harness incredible computational power that effectively allows us to self-police behavior everywhere we go. As pervasiveness expands, so does trust.
  • ...20 more annotations...
  • Some embrace networked, data-driven lives and are comfortable volunteering embarrassing, real time information about what we're doing, whom we're doing it with, and how we feel about our monitored activities.
  • Put it all together and we can see that our conception of what it means to be human has become "design space." We're now Humanity 2.0, primed for optimization through commercial upgrades. And today's apps are more harbinger than endpoint.
  • philosophers have had much to say about the enticing and seemingly inevitable dispersion of technological mental prosthetics that promise to substitute for or enhance some of our motivational powers.
  • beyond the practical issues lie a constellation of central ethical concerns.
  • they should cause us to pause as we think about a possible future that significantly increases the scale and effectiveness of willpower-enhancing apps. Let's call this hypothetical future Digital Willpower World and characterize the ethical traps we're about to discuss as potential general pitfalls
  • it is antithetical to the ideal of "resolute choice." Some may find the norm overly perfectionist, Spartan, or puritanical. However, it is not uncommon for folks to defend the idea that mature adults should strive to develop internal willpower strong enough to avoid external temptations, whatever they are, and wherever they are encountered.
  • In part, resolute choosing is prized out of concern for consistency, as some worry that lapse of willpower in any context indicates a generally weak character.
  • Fragmented selves behave one way while under the influence of digital willpower, but another when making decisions without such assistance. In these instances, inconsistent preferences are exhibited and we risk underestimating the extent of our technological dependency.
  • It simply means that when it comes to digital willpower, we should be on our guard to avoid confusing situational with integrated behaviors.
  • the problem of inauthenticity, a staple of the neuroethics debates, might arise. People might start asking themselves: Has the problem of fragmentation gone away only because devices are choreographing our behavior so powerfully that we are no longer in touch with our so-called real selves -- the selves who used to exist before Digital Willpower World was formed?
  • Infantilized subjects are morally lazy, quick to have others take responsibility for their welfare. They do not view the capacity to assume personal responsibility for selecting means and ends as a fundamental life goal that validates the effort required to remain committed to the ongoing project of maintaining willpower and self-control.
  • Michael Sandel's Atlantic essay, "The Case Against Perfection." He notes that technological enhancement can diminish people's sense of achievement when their accomplishments become attributable to human-technology systems and not an individual's use of human agency.
  • Borgmann worries that this environment, which habituates us to be on auto-pilot and delegate deliberation, threatens to harm the powers of reason, the most central component of willpower (according to the rationalist tradition).
  • In several books, including Technology and the Character of Contemporary Life, he expresses concern about technologies that seem to enhance willpower but only do so through distraction. Borgmann's paradigmatic example of the non-distracted, focally centered person is a serious runner. This person finds the practice of running maximally fulfilling, replete with the rewarding "flow" that can only come when mind/body and means/ends are unified, while skill gets pushed to the limit.
  • Perhaps the very conception of a resolute self was flawed. What if, as psychologist Roy Baumeister suggests, willpower is more a "staple of folk psychology" than a real way of thinking about our brain processes?
  • novel approaches suggest the will is a flexible mesh of different capacities and cognitive mechanisms that can expand and contract, depending on the agent's particular setting and needs. Contrary to the traditional view that identifies the unified and cognitively transparent self as the source of willed actions, the new picture embraces a rather diffused, extended, and opaque self who is often guided by irrational trains of thought. What actually keeps the self and its will together are the given boundaries offered by biology, a coherent self narrative created by shared memories and experiences, and society. If this view of the will as an expanding and contracting system with porous and dynamic boundaries is correct, then it might seem that the new motivating technologies and devices can only increase our reach and further empower our willing selves.
  • "It's a mistake to think of the will as some interior faculty that belongs to an individual--the thing that pushes the motor control processes that cause my action," Gallagher says. "Rather, the will is both embodied and embedded: social and physical environment enhance or impoverish our ability to decide and carry out our intentions; often our intentions themselves are shaped by social and physical aspects of the environment."
  • It makes perfect sense to think of the will as something that can be supported or assisted by technology. Technologies, like environments and institutions can facilitate action or block it. Imagine I have the inclination to go to a concert. If I can get my ticket by pressing some buttons on my iPhone, I find myself going to the concert. If I have to fill out an application form and carry it to a location several miles away and wait in line to pick up my ticket, then forget it.
  • Perhaps the best way forward is to put a digital spin on the Socratic dictum of knowing myself and submit to the new freedom: the freedom of consuming digital willpower to guide me past the sirens.
Javier E

The Conservative War on Liberal Media Has a Long History - Nicole Hemmer - The Atlantic - 0 views

  • Ailes made conservative news popular and profitable, but he was not the first to mingle partisanship with news. The twinned concepts of balance and bias were not his legacy but his inheritance. Long before Fox News, before Ailes and Rush Limbaugh and Sean Hannity, there was a conservative media complex in the United States refining a theory of liberal media bias.
  • The idea of “fair and balanced” partisan media has its roots in the 1940s and 1950s. Human Events, the right-wing newsweekly founded in 1944, was dedicated to publishing the “facts” other outlets overlooked.
  • By the early 1960s, Human Events arrived at this formulation of its mission: In reporting the news, Human Events is objective; it aims for accurate representation of the facts. But it is not impartial. It looks at events through eyes that are biased in favor of limited constitutional government, local self-government, private enterprise, and individual freedom.
  • ...9 more annotations...
  • In distinguishing between objectivity and impartiality, Human Events’ editors created a space where “bias” was an appropriate journalistic value, one that could work in tandem with objectivity.
  • two events in the early 1960s convinced the right that creating conservative media wasn’t enough to achieve balance. Conservatives would also have to discredit existing media.
  • Conservative discontent with the FCC focused on the Fairness Doctrine
  • Conservatives felt the Fairness Doctrine unfairly tilted the playing field against them. Though devised to encourage controversial broadcasting, in practice the doctrine often led broadcasters to avoid controversy so they wouldn’t have to give away free airtime. To conservatives, avoiding controversy inevitably meant silencing right-wing voices.
  • the right repeatedly challenged the central assumptions the FCC—and Americans more broadly—made about journalism. For much of the 20th century, journalists cleaved to the idea of objectivity. Opinion and analysis had their place, but that place was distinct and separate from the news. Conservative broadcasts, on the other hand, were by their very nature opinion. Fairness dictated these partisan broadcasters provide airtime for a response.
  • Conservatives saw the media landscape differently. They viewed objectivity as a mask concealing entrenched liberal bias, hiding the slanted reporting that dominated American media. Because of this, the right believed fairness did not require a response to conservative broadcasts; conservative broadcasts were the response. Unable to bring the FCC around to their position, conservatives increasingly saw the commission as a powerful government agency dedicated to maintaining media’s liberal tilt.
  • In calling coverage of Goldwater “unfounded in fact,” Manion was making another argument to which conservatives anchored their charges of liberal bias: Established media did not just slant the news—they fabricated it. And if established media couldn’t be counted on for truth, the argument went, then surely they should be required to offer both sides of the argument. In the years that followed, conservatives began an active campaign against liberal bias
  • The combined forces of the administration and its conservative media-research wing had an effect. By 1971 CBS Radio had launched Spectrum, a debate show featuring conservatives like Stan Evans, James Kilpatrick, and Phyllis Schlafly. That same year 60 Minutes pitted conservative Kilpatrick against liberal Nicholas von Hoffman in a regular segment called “Point/Counterpoint.” By then, even the publisher of Human Events, in the midst of selling his paper as an alternative to liberal media, had to admit that conservatives were popping up all over established media—even the editorial pages of “that holy house organ of Liberalism—the New York Times.”
  • So balance and bias became part of the American news diet long before Ailes entered the conservative media game. Why does that matter? It makes Ailes’s successes at Fox News far more understandable—and far less Ailes-centric. By the time Ailes entered the game, the American right had spent a generation seeking out conservative alternatives to the “liberal media,” and America’s news media was already in the midst of a revolution that made Fox News possible.
Javier E

The Dangers of Certainty: A Lesson From Auschwitz - NYTimes.com - 0 views

  • in 1973, the BBC aired an extraordinary documentary series called “The Ascent of Man,” hosted by one Dr. Jacob Bronowski
  • It was not an account of human biological evolution, but cultural evolution — from the origins of human life in the Rift Valley to the shifts from hunter/gatherer societies,  to nomadism and then settlement and civilization, from agriculture and metallurgy to the rise and fall of empires: Assyria, Egypt, Rome.
  • The tone of the programs was rigorous yet permissive, playful yet precise, and always urgent, open and exploratory. I remember in particular the programs on the trial of Galileo, Darwin’s hesitancy about publishing his theory of evolution and the dizzying consequences of Einstein’s theory of relativity.
  • ...11 more annotations...
  • For Bronowski, science and art were two neighboring mighty rivers that flowed from a common source: the human imagination.
  • For Dr. Bronowski, there was no absolute knowledge and anyone who claims it — whether a scientist, a politician or a religious believer — opens the door to tragedy. All scientific information is imperfect and we have to treat it with humility. Such, for him, was the human condition.
  • This is the condition for what we can know, but it is also, crucially, a moral lesson. It is the lesson of 20th-century painting from Cubism onwards, but also that of quantum physics. All we can do is to push deeper and deeper into better approximations of an ever-evasive reality
  • Errors are inextricably bound up with pursuit of human knowledge, which requires not just mathematical calculation but insight, interpretation and a personal act of judgment for which we are responsible.
  • Dr. Bronowski insisted that the principle of uncertainty was a misnomer, because it gives the impression that in science (and outside of it) we are always uncertain. But this is wrong. Knowledge is precise, but that precision is confined within a certain toleration of uncertainty.
  • The emphasis on the moral responsibility of knowledge was essential for all of Dr. Bronowski’s work. The acquisition of knowledge entails a responsibility for the integrity of what we are as ethical creatures.
  • Pursuing knowledge means accepting uncertainty. Heisenberg’s principle has the consequence that no physical events can ultimately be described with absolute certainty or with “zero tolerance,” as it were. The more we know, the less certain we are.
  • Our relations with others also require a principle of tolerance. We encounter other people across a gray area of negotiation and approximation. Such is the business of listening and the back and forth of conversation and social interaction.
  • For Dr. Bronowski, the moral consequence of knowledge is that we must never judge others on the basis of some absolute, God-like conception of certainty. All knowledge, all information that passes between human beings, can be exchanged only within what we might call “a play of tolerance,” whether in science, literature, politics or religion.
  • The play of tolerance opposes the principle of monstrous certainty that is endemic to fascism and, sadly, not just fascism but all the various faces of fundamentalism. When we think we have certainty, when we aspire to the knowledge of the gods, then Auschwitz can happen and can repeat itself.
  • The pursuit of scientific knowledge is as personal an act as lifting a paintbrush or writing a poem, and they are both profoundly human. If the human condition is defined by limitedness, then this is a glorious fact because it is a moral limitedness rooted in a faith in the power of the imagination, our sense of responsibility and our acceptance of our fallibility. We always have to acknowledge that we might be mistaken.
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • ...34 more annotations...
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor marke
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • The modern economy is complex, and most of these trends have multiple causes.
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced
  • There will be no place to go but the unemployment line.
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  •  economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
Javier E

Macro Manners - NYTimes.com - 0 views

  • Simon Wren-Lewis worries whether he has been too rude toward policy makers who forced a turn toward austerity in 2010, helping to derail recovery in advanced countries.
  • objectively there’s every reason to be very angry: policy makers threw out everything we’ve learned about business cycles these past 80 years in favor of doctrines that made them feel comfortable — and millions of workers paid the price.
  • should we cut them some slack nonetheless?
  • ...2 more annotations...
  • This is basically an operational question; as Mark says, the goal is to change minds — although the big question there is whether you’re trying to change the minds of the policy makers themselves, or the minds of other people, so we can get a new and better set of policy makers.
  • it matters what niche you yourself fill in the intellectual ecology. Insider-type positions, like that of being the senior economist at the IMF, require tact and euphemisms. Outsider positions, like that of being an iconoclastic columnist at the New York Times, require a lot of effort to get people's attention.
Javier E

The World According to Team Walt - NYTimes.com - 0 views

  • “Breaking Bad” implicitly challenges audiences to get down to bedrock and actually justify those norms. Why is it so wrong to kill strangers — often dangerous strangers! — so that your own family can survive and prosper? Why is it wrong to exploit people you don’t see or care about for the sake of those inside your circle? Why is Walter White’s empire-building — carried out with boldness, brilliance and guile — not an achievement to be admired?
  • The allure for Team Walt is not ultimately the pull of nihilism, or the harmless thrill of rooting for a supervillain. It's the pull of an alternative moral code, neither liberal nor Judeo-Christian, with an internal logic all its own. As James Bowman wrote in The New Atlantis, embracing Walt doesn't require embracing "individual savagery" and a world without moral rules. It just requires a return to "old rules" — to "the tribal, family-oriented society and the honor culture that actually did precede the Enlightenment's commitment to universal values."
  • Those rules seem cruel by the lights of both cosmopolitanism and Christianity, but they are not irrational or necessarily false. Their Darwinian logic is clear enough, and where the show takes place — in the shadow of cancer, the shadow of death — the kindlier alternatives can seem softheaded, pointless, naïve.
  • ...1 more annotation...
  • It’s comforting to dismiss Walt’s admirers as sickos, idiots, “bad fans.” But they, too, can be moralists — drawn by their sympathy for Walter White into a worldview that still lies percolating, like one of his reactions, just below the surface of every human heart.
Javier E

History News Network | History Gets Into Bed with Psychology, and It's a Happy Match - 0 views

  • The fact that many of our self-protective delusions are built into the way the brain works is no justification for not trying to override them. Knowing how dissonance works helps us identify our own inclinations to perpetuate errors -- and protect ourselves from those who can’t. Or won’t.
  • at last, history has gotten into bed with psychological science, and it’s a happy match. History gives us the data of, in Barbara Tuchman’s splendid words, our march of folly -- repeated examples of human beings unable and unwilling to learn from mistakes, let alone to admit them. Cognitive science shows us why
  • Our brains, which have allowed us to travel into outer space, have a whole bunch of design flaws, which is why we have so much breathtaking bumbling here on Earth.
  • Of the many built-in biases in human thought, three have perhaps the greatest consequences for our own history and that of nations: the belief that we see things as they really are, rather than as we wish them to be; the belief that we are better, kinder, smarter, and more ethical than average; and the confirmation bias, which sees to it that we notice, remember, and accept information that confirms our beliefs -- and overlook, forget, and discount information that disconfirms our beliefs.
  • The great motivational theory that accommodates all of these biases is cognitive dissonance, developed by Leon Festinger in 1957 and further refined and transformed into a theory of self-justification by his student (and later my coauthor and friend) Elliot Aronson. The need to reduce dissonance is the key mechanism that underlies the reluctance to be wrong, to change our minds, to admit serious mistakes, and to be unwilling to accept unwelcome information
  • The greater the dissonance between who we are and the mistake we made or the cruelty we committed, the greater the need to justify the mistake, the crime, the villainy, instead of admitting and rectifying it
Javier E

Why Are Hundreds of Harvard Students Studying Ancient Chinese Philosophy? - Christine Gross-Loh - The Atlantic - 0 views

  • Puett's course Classical Chinese Ethical and Political Theory has become the third most popular course at the university. The only classes with higher enrollment are Intro to Economics and Intro to Computer Science.
  • the class fulfills one of Harvard's more challenging core requirements, Ethical Reasoning. It's clear, though, that students are also lured in by Puett's bold promise: “This course will change your life.”
  • Puett uses Chinese philosophy as a way to give undergraduates concrete, counter-intuitive, and even revolutionary ideas, which teach them how to live a better life.
  • Puett puts a fresh spin on the questions that Chinese scholars grappled with centuries ago. He requires his students to closely read original texts (in translation) such as Confucius’s Analects, the Mencius, and the Daodejing and then actively put the teachings into practice in their daily lives. His lectures use Chinese thought in the context of contemporary American life to help 18- and 19-year-olds who are struggling to find their place in the world figure out how to be good human beings; how to create a good society; how to have a flourishing life.
  • Puett began offering his course to introduce his students not just to a completely different cultural worldview but also to a different set of tools. He told me he is seeing more students who are “feeling pushed onto a very specific path towards very concrete career goals”
  • Puett tells his students that being calculating and rationally deciding on plans is precisely the wrong way to make any sort of important life decision. The Chinese philosophers they are reading would say that this strategy makes it harder to remain open to other possibilities that don’t fit into that plan.
  • Students who do this “are not paying enough attention to the daily things that actually invigorate and inspire them, out of which could come a really fulfilling, exciting life,” he explains. If what excites a student is not the same as what he has decided is best for him, he becomes trapped on a misguided path, slated to begin an unfulfilling career.
  • He teaches them that: The smallest actions have the most profound ramifications.
  • From a Chinese philosophical point of view, these small daily experiences provide us endless opportunities to understand ourselves. When we notice and understand what makes us tick, react, feel joyful or angry, we develop a better sense of who we are that helps us when approaching new situations. Mencius, a late Confucian thinker (4th century B.C.E.), taught that if you cultivate your better nature in these small ways, you can become an extraordinary person with an incredible influence
  • Decisions are made from the heart. Americans tend to believe that humans are rational creatures who make decisions logically, using our brains. But in Chinese, the words for “mind” and “heart” are the same.
  • Whenever we make decisions, from the prosaic to the profound (what to make for dinner; which courses to take next semester; what career path to follow; whom to marry), we will make better ones when we intuit how to integrate heart and mind and let our rational and emotional sides blend into one.
  • In the same way that one deliberately practices the piano in order to eventually play it effortlessly, through our everyday activities we train ourselves to become more open to experiences and phenomena so that eventually the right responses and decisions come spontaneously, without angst, from the heart-mind.
  • If the body leads, the mind will follow.&nbsp;Behaving kindly (even when you are not feeling kindly), or smiling at someone (even if you aren’t feeling particularly friendly at the moment) can cause actual differences in how you end up feeling and behaving, even ultimately changing the outcome of a situation.
  • Aristotle said, “We are what we repeatedly do,” a view shared by thinkers such as Confucius, who taught that the importance of rituals lies in how they inculcate a certain sensibility in a person.
  • “The Chinese philosophers we read taught that the way to really change lives for the better is from a very mundane level, changing the way people experience and respond to the world, so what I try to do is to hit them at that level. I’m not trying to give my students really big advice about what to do with their lives. I just want to give them a sense of what they can do daily to transform how they live.”
  • Their assignments are small ones: to first observe how they feel when they smile at a stranger, hold open a door for someone, engage in a hobby. He asks them to take note of what happens next: how every action, gesture, or word dramatically affects how others respond to them. Then Puett asks them to pursue more of the activities that they notice arouse positive, excited feelings.
  • Once they’ve understood themselves better and discovered what they love to do they can then work to become adept at those activities through ample practice and self-cultivation. Self-cultivation is related to another classical Chinese concept: that effort is what counts the most, more than talent or aptitude. We aren’t limited to our innate talents; we all have enormous potential to expand our abilities if we cultivate them
  • Being interconnected, focusing on mundane, everyday practices, and understanding that great things begin with the very smallest of acts are radical ideas for young people living in a society that pressures them to think big and achieve individual excellence.
  • One of Puett’s former students, Adam Mitchell, was a math and science whiz who went to Harvard intending to major in economics. At Harvard specifically and in society in general, he told me, “we’re expected to think of our future in this rational way: to add up the pros and cons and then make a decision. That leads you down the road of ‘Stick with what you’re good at’”—a road with little risk but little reward.
  • after his introduction to Chinese philosophy during his sophomore year, he realized this wasn’t the only way to think about the future. Instead, he tried courses he was drawn to but wasn’t naturally adroit at because he had learned how much value lies in working hard to become better at what you love. He became more aware of the way he was affected by those around him, and how they were affected by his own actions in turn. Mitchell threw himself into foreign language learning, feels his relationships have deepened, and is today working towards a master’s degree in regional studies.
  • “I can happily say that Professor Puett lived up to his promise, that the course did in fact change my life.”
Javier E

Science: A New Map of the Human Brain - WSJ.com - 0 views

  • The popular left/right story has no solid basis in science. The brain doesn't work one part at a time, but rather as a single interactive system, with all parts contributing in concert, as neuroscientists have long known. The left brain/right brain story may be the mother of all urban legends: It sounds good and seems to make sense—but just isn't true.
  • There is a better way to understand the functioning of the brain, based on another, ordinarily overlooked anatomical division—between its top and bottom parts. We call this approach "the theory of cognitive modes." Built on decades of unimpeachable research that has largely remained inside scientific circles, it offers a new way of viewing thought and behavior
  • Our theory has emerged from the field of neuropsychology, the study of higher cognitive functioning—thoughts, wishes, hopes, desires and all other aspects of mental life. Higher cognitive functioning is seated in the cerebral cortex, the rind-like outer layer of the brain that consists of four lobes
  • The top brain comprises the entire parietal lobe and the top (and larger) portion of the frontal lobe. The bottom comprises the smaller remainder of the frontal lobe and all of the occipital and temporal lobes.
  • research reveals that the top-brain system uses information about the surrounding environment (in combination with other sorts of information, such as emotional reactions and the need for food or drink) to figure out which goals to try to achieve. It actively formulates plans, generates expectations about what should happen when a plan is executed and then, as the plan is being carried out, compares what is happening with what was expected, adjusting the plan accordingly.
  • The bottom-brain system organizes signals from the senses, simultaneously comparing what is being perceived with all the information previously stored in memory. It then uses the results of such comparisons to classify and interpret the object or event, allowing us to confer meaning on the world.
  • The top- and bottom-brain systems always work together, just as the hemispheres always do. Our brains are not engaged in some sort of constant cerebral tug of war
  • Although the top and bottom parts of the brain are always used during all of our waking lives, people do not rely on them to an equal degree. To extend the bicycle analogy, not everyone rides a bike the same way. Some may meander, others may race.
  • You can use the top-brain system to develop simple and straightforward plans, as required by a situation—or you have the option to use it to develop detailed and complex plans (which are not imposed by a situation).
  • Our theory predicts that people fit into one of four groups, based on their typical use of the two brain systems. Depending on the degree to which a person uses the top and bottom systems in optional ways, he or she will operate in one of four cognitive modes: Mover, Perceiver, Stimulator and Adaptor.
  • Mover mode results when the top- and bottom-brain systems are both highly utilized in optional ways. Oprah Winfrey
  • According to the theory, people who habitually rely on Mover mode are most comfortable in positions that allow them to plan, act and see the consequences of their actions. They are well suited to being leaders.
  • Perceiver mode results when the bottom-brain system is highly utilized in optional ways but the top is not. Think of the Dalai Lama or Emily Dickinson
  • People who habitually rely on Perceiver mode try to make sense in depth of what they perceive; they interpret their experiences, place them in context and try to understand the implications.
  • such people—including naturalists, pastors, novelists—typically lead lives away from the limelight. Those who rely on this mode often play a crucial role in a group; they can make sense of events and provide a bigger picture
  • Stimulator mode, which results when the top-brain system is highly utilized but the bottom is not. According to our theory, people who interact with the world in Stimulator mode often create and execute complex and detailed plans (using the top-brain system) but fail to register consistently and accurately the consequences of acting on those plans
  • they may not always note when enough is enough. Their actions can be disruptive, and they may not adjust their behavior appropriately.
  • Examples of people who illustrate Stimulator mode would include Tiger Woods
  • Adaptor mode, which results when neither the top- nor the bottom-brain system is highly utilized in optional ways. People who think in this mode are not caught up in initiating plans, nor are they fully focused on classifying and interpreting what they experience. Instead, they become absorbed by local events and the immediate requirements of the situation
  • They are responsive and action-oriented and tend to "go with the flow." Others see them as free-spirited and fun to be with.
  • those who typically operate in Adaptor mode can be valuable team members. In business, they often form the backbone of an organization, carrying out essential operations.
  • No one mode is "better" than the others. Each is more or less useful in different circumstances, and each contributes something useful to a team. Our theory leads us to expect that you can work with others most productively when you are aware not just of the strengths and weakness of their preferred modes but also of the strengths and weakness of your own preferred mode
Javier E

Never Forgetting a Face - NYTimes.com - 1 views

  • Face-matching today could enable mass surveillance, “basically robbing everyone of their anonymity,” he says, and inhibit people’s normal behavior outside their homes.
  • Dr. Atick says the technology he helped cultivate requires some special safeguards. Unlike fingerprinting or other biometric techniques, face recognition can be used at a distance, without people’s awareness; it could then link their faces and identities to the many pictures they have put online. But in the United States, no specific federal law governs face recognition.
  • Dr. Atick has been working behind the scenes to influence the outcome. He is part of a tradition of scientists who have come to feel responsible for what their work has wrought.
  • Facebook researchers recently reported how the company had developed a powerful pattern-recognition system, called DeepFace, which had achieved near-human accuracy in identifying people’s faces.
  • To work, the technology needs a large data set, called an image gallery, containing the photographs or video stills of faces already identified by name. Software automatically converts the topography of each face in the gallery into a unique mathematical code, called a faceprint. Once people are faceprinted, they may be identified in existing or subsequent photographs or as they walk in front of a video camera.
  • some casinos faceprint visitors, seeking to identify repeat big-spending customers for special treatment. In Japan, a few grocery stores use face-matching to classify some shoppers as shoplifters or even “complainers” and blacklist them.
  • Is faceprinting as innocuous as photography, an activity that people may freely perform? Or is a faceprint a unique indicator, like a fingerprint or a DNA sequence, that should require a person’s active consent before it can be collected, matched, shared or sold?
  • A private high school in Los Angeles also has an FST system. The school uses the technology to recognize students when they arrive — a security measure intended to keep out unwanted interlopers. But it also serves to keep the students in line.“If a girl will come to school at 8:05, the door will not open and she will be registered as late,” Mr. Farkash explained. “So you can use the system not only for security but for education, for better discipline.”
  • As with many emerging technologies, the arguments tend to coalesce around two predictable poles: those who think the technology needs rules and regulation to prevent violations of civil liberties and those who fear that regulation would stifle innovation. But face recognition stands out among such technologies: While people can disable smartphone geolocation and other tracking techniques, they can’t turn off their faces.
  • To maintain the status quo around public anonymity, he says, companies should take a number of steps: They should post public notices where they use face recognition; seek permission from a consumer before collecting a faceprint with a unique, repeatable identifier like a name or code number; and use faceprints only for the specific purpose for which they have received permission. Those steps, he says, would inhibit sites, stores, apps and appliances from covertly linking a person in the real world with their multiple online personas.
Javier E

Beyond Energy, Matter, Time and Space - NYTimes.com - 0 views

  • New particles may yet be discovered, and even new laws. But it is almost taken for granted that everything from physics to biology, including the mind, ultimately comes down to four fundamental concepts: matter and energy interacting in an arena of space and time.
  • What makes “Mind and Cosmos” worth reading is that Dr. Nagel is an atheist, who rejects the creationist idea of an intelligent designer. The answers, he believes, may still be found through science, but only by expanding it further than it may be willing to go.
  • “Humans are addicted to the hope for a final reckoning,” he wrote, “but intellectual humility requires that we resist the temptation to assume that the tools of the kind we now have are in principle sufficient to understand the universe as a whole.”
  • Dr. Tegmark, in his new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” turns the idea on its head: The reason mathematics serves as such a forceful tool is that the universe is a mathematical structure. Going beyond Pythagoras and Plato, he sets out to show how matter, energy, space and time might emerge from n
  • “Above all,” he wrote, “I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.”
  • Neuroscientists assume that these mental powers somehow emerge from the electrical signaling of neurons — the circuitry of the brain. But no one has come close to explaining how that occurs. That, Dr. Nagel proposes, might require another revolution: showing that mind, along with matter and energy, is “a fundamental principle of nature” — and that we live in a universe primed “to generate beings capable of comprehending it.” Rather than being a blind series of random mutations and adaptations, evolution would have a direction, maybe even a purpose.
  • the mathematician Edward Frenkel noted that only a small part of the vast ocean of mathematics appears to describe the real world. The rest seems to b
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas Carr - The Atlantic - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • Because a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
  • Automation increases the efficiency and speed of tasks, but decreases the individual's knowledge of a task and a human's ability to learn.