TOK Friends / Group items tagged: Set


Online and Scared - The New York Times - 0 views

  • That is to say, a critical mass of our interactions had moved to a realm where we’re all connected but no one’s in charge.
  • And, I would argue, 2016 will be remembered as the year when we fully grasped just how scary that can be — how easy it was for a presidential candidate to tweet out untruths and half-truths faster than anyone could correct them, how cheap it was for Russia to intervene on Trump’s behalf with hacks of Democratic operatives’ computers and how unnerving it was to hear Yahoo’s chief information security officer, Bob Lord, say that his company still had “not been able to identify” how one billion Yahoo accounts and their sensitive user information were hacked in 2013.
  • Facebook — which wants all the readers and advertisers of the mainstream media but not to be saddled with its human editors and fact-checkers — is now taking more seriously its responsibilities as a news purveyor in cyberspace.
  • And that begins with teaching them that the internet is an open sewer of untreated, unfiltered information, where they need to bring skepticism and critical thinking to everything they read and basic civic decency to everything they write.
  • One assessment required middle schoolers to explain why they might not trust an article on financial planning that was written by a bank executive and sponsored by a bank.
  • Many people assume that because young people are fluent in social media they are equally perceptive about what they find there. Our work shows the opposite to be true.
  •  
    The internet has become a major issue as more and more people, especially teenagers, spend most of their time online. As a social medium, the internet delivers information faster and more widely than any traditional medium; increasingly, the internet breaks an issue first and traditional media such as television follow up with more detailed coverage. As the internet develops, however, we also need to develop rules and restrictions, because we underestimate how dangerous it can be if it is weaponized. There is a dilemma here: the internet is popular precisely because of the unlimited freedom people feel online, and once the police and other authorities get involved, people would ultimately lose that freedom. Censorship in China is a good example of how people respond when rules are imposed on the internet. There should be some sort of balance that we can strive for in the future. --Sissi (1/11/2017)

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
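
Several of the excerpts above turn on statistical machinery rather than narrative: regression to the mean, publication bias, Fisher's five-per-cent significance threshold, and Palmer's funnel graph. A minimal simulation sketch can make the "decline effect" concrete; the true effect size, sample size, and number of studies below are arbitrary assumptions chosen only to illustrate the selection mechanism, not figures from the article.

```python
# A minimal sketch, assuming a small true effect studied with deliberately
# underpowered samples. Only results that clear the conventional significance
# threshold in the hypothesized direction get "published"; each published study
# is then replicated once with no filter applied.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1    # true group difference in standard-deviation units (assumed)
N_PER_GROUP = 20     # small samples, so most studies are underpowered (assumed)
NUM_STUDIES = 2000   # how many labs try the experiment (assumed)

def run_study(n=N_PER_GROUP, effect=TRUE_EFFECT):
    """One two-group study: returns the observed difference and a rough z statistic."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return diff, diff / se

published, replications = [], []
for _ in range(NUM_STUDIES):
    diff, z = run_study()
    if z > 1.96:                                # the conventional 5% boundary
        published.append(diff)                  # only the lucky overestimates get in
        replications.append(run_study()[0])     # rerun the design, keep whatever comes

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {statistics.mean(published):.2f}")    # inflated
print(f"mean replication effect: {statistics.mean(replications):.2f}") # falls back toward 0.1
```

Plotting each published estimate against its sample size would likewise reproduce the asymmetry Palmer found in his funnel graphs: among small studies, only the lucky overestimates clear the significance bar, so they scatter to one side of the true value. Ioannidis's "significance chasing" compounds the same arithmetic, since trying many analyses until one crosses the threshold raises the odds that the published result is a fluke.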

The Failure of Rational Choice Philosophy - NYTimes.com - 1 views

  • According to Hegel, history is idea-driven.
  • Ideas for him are public, rather than in our heads, and serve to coordinate behavior. They are, in short, pragmatically meaningful words.  To say that history is “idea driven” is to say that, like all cooperation, nation building requires a common basic vocabulary.
  • One prominent component of America’s basic vocabulary is ”individualism.”
  • individualism, the desire to control one’s own life, has many variants. Tocqueville viewed it as selfishness and suspected it, while Emerson and Whitman viewed it as the moment-by-moment expression of one’s unique self and loved it.
  • individualism as the making of choices so as to maximize one’s preferences. This differed from “selfish individualism” in that the preferences were not specified: they could be altruistic as well as selfish. It differed from “expressive individualism” in having general algorithms by which choices were made. These made it rational.
  • it was born in 1951 as “rational choice theory.” Rational choice theory’s mathematical account of individual choice, originally formulated in terms of voting behavior, made it a point-for-point antidote to the collectivist dialectics of Marxism
  • Functionaries at RAND quickly expanded the theory from a tool of social analysis into a set of universal doctrines that we may call “rational choice philosophy.” Governmental seminars and fellowships spread it to universities across the country, aided by the fact that any alternative to it would by definition be collectivist.
  • rational choice philosophy moved smoothly on the backs of their pupils into the “real world” of business and government
  • Today, governments and businesses across the globe simply assume that social reality  is merely a set of individuals freely making rational choices.
  • At home, anti-regulation policies are crafted to appeal to the view that government must in no way interfere with Americans’ freedom of choice.
  • But the real significance of rational choice philosophy lay in ethics. Rational choice theory, being a branch of economics, does not question people’s preferences; it simply studies how they seek to maximize them. Rational choice philosophy seems to maintain this ethical neutrality (see Hans Reichenbach’s 1951 “The Rise of Scientific Philosophy,” an unwitting masterpiece of the genre); but it does not.
  • Whatever my preferences are, I have a better chance of realizing them if I possess wealth and power. Rational choice philosophy thus promulgates a clear and compelling moral imperative: increase your wealth and power!
  • Today, institutions which help individuals do that (corporations, lobbyists) are flourishing; the others (public hospitals, schools) are basically left to rot. Business and law schools prosper; philosophy departments are threatened with closure.
  • Hegel, for one, had denied all three of its central claims in his “Encyclopedia of the Philosophical Sciences” over a century before. In that work, as elsewhere in his writings, nature is not neatly causal, but shot through with randomness. Because of this chaos, we cannot know the significance of what we have done until our community tells us; and ethical life correspondingly consists, not in pursuing wealth and power, but in integrating ourselves into the right kinds of community.
  • By 1953, W. V. O. Quine was exposing the flaws in rational choice epistemology. John Rawls, somewhat later, took on its sham ethical neutrality, arguing that rationality in choice includes moral constraints. The neat causality of rational choice ontology, always at odds with quantum physics, was further jumbled by the environmental crisis, exposed by Rachel Carson’s 1962 book “The Silent Spring,” which revealed that the causal effects of human actions were much more complex, and so less predictable, than previously thought.
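
The "general algorithms by which choices were made" described above are, at bottom, maximization of a preference (utility) function over whatever options are available. A minimal sketch of that algorithm follows; the options, probabilities, and utilities are invented purely for illustration, since the theory itself says nothing about where preferences come from.

```python
# A minimal sketch of choice as preference maximization. The options, probabilities,
# and utilities are invented for illustration; the theory is silent on where the
# preferences come from, and an altruistic utility function is handled identically.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    outcomes: list  # (probability, utility) pairs

def expected_utility(option):
    return sum(p * u for p, u in option.outcomes)

def rational_choice(options):
    """The 'general algorithm': pick whichever option maximizes expected utility."""
    return max(options, key=expected_utility)

options = [
    Option("invest the savings", [(0.6, 10.0), (0.4, -5.0)]),  # expected utility 4.0
    Option("donate to charity", [(1.0, 3.0)]),                 # expected utility 3.0
    Option("do nothing", [(1.0, 0.0)]),                        # expected utility 0.0
]
best = rational_choice(options)
print(best.name, expected_utility(best))  # -> invest the savings 4.0
```

The same machinery treats an altruistic utility function exactly like a selfish one, which is the sense in which rational choice philosophy claims ethical neutrality; the article's point is that wealth and power raise expected utility under almost any preferences, so the supposedly neutral algorithm still yields a moral imperative.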

To Make the World Better, Think Small - The New York Times - 0 views

  • There is a solution, however, to psychic numbing: Think small. In the fund-raising business, there’s an old axiom that “one is greater than one million.” This isn’t bad math; it is a reminder that when it comes to people in need, one million is a statistic, while one is a human story.
  • As we head into 2017, do you want a solution better than “Screw ’em”? Maybe your problem is that you are thinking too big. This year, start with one, not one million. It might just be a happy new year after all.
  •  
    I once read a story about a marathon runner. He overcame the tiredness and pain by setting small goals throughout the long course: he would study the course beforehand and memorize significant landmarks along it. Having those small goals relieved some of the mental pressure and made the course feel more approachable. I think this is an example of deceiving our brain for good. --Sissi (12/31/2016)

In This Snapchat Campaign, Election News Is Big and Then It's Gone - The New York Times - 1 views

  • Every modern presidential election is at least in part defined by the cool new media breakthrough of its moment.
  • In 2000, there was email, and by golly was that a big change from the fax. The campaigns could get their messages in front of print and cable news reporters — who could still dominate the campaign narrative — at will,
  • Then 2008: Facebook made it that much easier for campaigns to reach millions of people directly,
  • The 2004 campaign was the year of the “Web log,” or blog, when mainstream reporters and campaigns officially began losing any control they may have had over political news
  • Marco Rubio’s campaign marched into the election season ready to fight the usual news-cycle-by-news-cycle skirmishes. It was surprised to learn that, lo and behold, “There was no news cycle — everything was one big fire hose,” Alex Conant, a senior Rubio strategist, told me. “News was constantly breaking and at the end of the day hardly anything mattered. Things would happen; 24 hours later, everyone was talking about something else.”
  • Snapchat represents a change to something else: the longevity of news, how durably it keeps in our brain cells and our servers.
  • Snapchat is recording the here and the now, playing for today. Tomorrow will bring something new that renders today obsolete. It’s a digital Tibetan sand painting made in the image of the millennial mind.
  • Snapchat executives say they set up the app this way because this is what their tens of millions of younger users want; it’s how they live.
  • They can’t possibly have enough bandwidth to process all the incoming information and still dwell on what already was, can they?
  • Experienced strategists and their candidates, who could always work through their election plans methodically — promoting their candidacies one foot in front of the other, adjusting here and there for the unexpected — suddenly found that they couldn’t operate the way they always did.
  • The question this year has been whether 2016 will be the “Snapchat election,
  • Then there was Jeb Bush, expecting to press ahead by presenting what he saw as leading-edge policy proposals that would set off a prolonged back-and-forth. When Mr. Bush rolled out a fairly sweeping plan to upend the college loan system, the poor guy thought this was going to become a big thing.
  • It drew only modest coverage and was quickly buried by the latest bit from Donald Trump.
  • In this “hit refresh” political culture, damaging news does not have to stick around for long, either. The next development, good or bad, replaces it almost immediately.
  • Mr. Miller pointed to a recent episode in which Mr. Trump said a protester at a rally had “ties to ISIS,” after that protester charged the stage. No such ties existed. “He says ‘ISIS is attacking me’; this was debunked in eight minutes by Twitter,” Mr. Miller said. “Cable talked about it for three hours and it went away.”
  • “Hillary Clinton said that she was under sniper fire in Bosnia” — she wasn’t — “and that has stuck with her for 20 years,”
  • Mr. Trump has mastered this era of short attention spans in politics by realizing that if you’re the one regularly feeding the stream, you can forever move past your latest trouble, and hasten the mass amnesia.
  • It was with this in mind that The Washington Post ran an editorial late last week reminding its readers of some of Mr. Trump’s more outlandish statements and policy positions
  • The Post urged its readers to “remember” more than two dozen items from Mr. Trump’s record, including that he promised “to round up 11 million undocumented immigrants and deport them,” and “lied about President Obama’s birth certificate.”
  • as the media habits of the young drive everybody else’s, I’m reminded of that old saw about those who forget history. Now, what was I saying?

How Did Consciousness Evolve? - The Atlantic - 0 views

  • Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?
  • The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions.
  • The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence
  • Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition.
  • It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.
  • Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life
  • The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum
  • At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
  • All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates
  • According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.
  • The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning.
  • The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement
  • In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
  • With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously
  • The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.
  • The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it
  • The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are
  • Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence
  • this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description.
  • I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.
  • In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.
  • If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us.
  • the cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.
  • Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes
  • Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD
  • the HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.
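
Two mechanisms in the excerpts above are concrete enough to caricature in code: selective signal enhancement, in which competing signals suppress one another until a few winners dominate, and the attention schema, an internal model that reports what is currently attended to while leaving out all the neural detail. The toy sketch below uses made-up signal names, inhibition strength, and iteration count; it illustrates the idea rather than reproducing any model from the article.

```python
# A toy sketch of two ideas from the excerpts: selective signal enhancement and an
# attention schema. Signal names, inhibition strength, and the number of rounds are
# assumptions made up for illustration, not values from the article.

def selective_enhancement(signals, rounds=5, inhibition=0.3):
    """Crude winner-take-all competition: each signal is suppressed in proportion to
    the total activity of its competitors, so the strongest input comes to dominate
    and the rest fall toward zero."""
    levels = dict(signals)
    for _ in range(rounds):
        total = sum(levels.values())
        levels = {name: max(0.0, level - inhibition * (total - level))
                  for name, level in levels.items()}
    return levels

def attention_schema(levels):
    """A deliberately vague self-model: it reports *what* currently dominates
    processing, with none of the mechanism (no neurons, synapses, or inhibition)."""
    winner = max(levels, key=levels.get)
    return {"attending_to": winner, "described_as": "an inner, non-physical focus"}

inputs = {"looming shadow": 0.9, "background hum": 0.4, "itch on leg": 0.3}
levels = selective_enhancement(inputs)
print(levels)                 # the shadow's signal survives; the others are squashed
print(attention_schema(levels))
```

The vagueness of the second function is the point: on the AST, a self-model that omits the physical machinery is exactly what would make attention feel like a non-physical, semi-magical inner essence.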

Quitters Never Win: The Costs of Leaving Social Media - Woodrow Hartzog and Evan Seling... - 2 views

  • Manjoo offers this security-centric path for folks who are anxious about the service being "one of the most intrusive technologies ever built," and believe that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Bottom line: stop tuning in and start dropping out if you suspect that the culture of oversharing, digital narcissism, and, above all, big-data-hungry, corporate profiteering will trump privacy settings.
  • Angwin plans on keeping a bare-bones profile. She'll maintain just enough presence to send private messages, review tagged photos, and be easy for readers to find. Others might try similar experiments, perhaps keeping friends, but reducing their communication to banal and innocuous expressions. But, would such disclosures be compelling or sincere enough to retain the technology's utility?
  • The other unattractive option is for social web users to willingly pay for connectivity with extreme publicity.
  • go this route if you believe privacy is dead, but find social networking too good to miss out on.
  • While we should be attuned to constraints and their consequences, there are at least four problems with conceptualizing the social media user's dilemma as a version of "if you can't stand the heat, get out of the kitchen".
  • The efficacy of abandoning social media can be questioned when others are free to share information about you on a platform long after you've left.
  • Second, while abandoning a single social technology might seem easy, this "love it or leave it" strategy -- which demands extreme caution and foresight from users and punishes them for their naivete -- isn't sustainable without great cost in the aggregate. If we look past the consequences of opting out of a specific service (like Facebook), we find a disconcerting and more far-reaching possibility: behavior that justifies a never-ending strategy of abandoning every social technology that threatens privacy -- a can being kicked down the road in perpetuity without us resolving the hard question of whether a satisfying balance between protection and publicity can be found online
  • if your current social network has no obligation to respect the obscurity of your information, what justifies believing other companies will continue to be trustworthy over time?
  • Sticking with the opt-out procedure turns digital life into a paranoid game of whack-a-mole where the goal is to stay ahead of the crushing mallet. Unfortunately, this path of perilously transferring risk from one medium to another is the direction we're headed if social media users can't make reasonable decisions based on the current context of obscurity, but instead are asked to assume all online social interaction can or will eventually lose its obscurity protection.
  • The fourth problem with the "leave if you're unhappy" ethos is that it is overly individualistic. If a critical mass participates in the "Opt-Out Revolution," what would happen to the struggling, the lonely, the curious, the caring, and the collaborative if the social web went dark?
  • Our point is that there is a middle ground between reclusion and widespread publicity, and the reduction of user options to quitting or coping, which are both problematic, need not be inevitable, especially when we can continue exploring ways to alleviate the user burden of retreat and the societal cost of a dark social web.
  • it is easy to presume that "even if you unfriend everybody on Facebook, and you never join Twitter, and you don't have a LinkedIn profile or an About.me page or much else in the way of online presence, you're still going to end up being mapped and charted and slotted in to your rightful place in the global social network that is life." But so long as it remains possible to create obscurity through privacy enhancing technology, effective regulation, contextually appropriate privacy settings, circumspect behavior, and a clear understanding of how our data can be accessed and processed, that fatalism isn't justified.

Faith in science and religion: Truth, authority, and the orderliness of nature. - 0 views

  • A common tactic of those who claim that science and religion are compatible is to argue that science, like religion, rests on faith: faith in the accuracy of what we observe, in the laws of nature, or in the value of reason
  • Such statements imply that science and religion are not that different because both seek the truth and use faith to find it
  • science is often described as a kind of religion.
  • Indeed, there is no evidence beyond revelation, authority, and scripture to support the religious claims above, and most of the world’s believers would reject at least one of them
  • faith involves pretending to know things you don’t
  • faith doesn’t mean “belief without good evidence,” but “confidence derived from scientific tests and repeated, documented experience.”
  • You have faith (i.e., confidence) that the sun will rise tomorrow because it always has, and there’s no evidence that the Earth has stopped rotating or the sun has burnt out.
  • We know no more now about the divine than we did 1,000 years ago.
  • The conflation of faith as “unevidenced belief” with faith as “justified confidence” is simply a word trick used to buttress religion.
  • The constant scrutiny of our peers ensures that science is largely self-correcting, so that we really can approach the truth about our universe
  • There is strong evidence for the Higgs boson, whose existence was confirmed last year by two independent teams using a giant accelerator and rigorous statistical analysis. But there isn’t, and never will be, any evidence for that sea of milk.
  • Two objects of scientific faith are said to be physical laws and reason. Doing science, it is said, requires unevidenced faith in the “orderliness of nature” and an “unexplained set of physical laws,” as well as in the value of reason in determining truth. Both claims are wrong.
  • The orderliness of nature—the set of so-called natural laws—is not an assumption but an observation
  • We take nature as we find it, and sometimes it behaves predictably.
  • Reason—the habit of being critical, logical, and of learning from experience—is not an a priori assumption but a tool that’s been shown to work
  • Finally, isn’t science at least based on the faith that it’s good to know the truth? Hardly.
  • So the next time you hear someone described as a “person of faith,” remember that although it’s meant as praise, it’s really an insult.

How Stanford Took On the Giants of Economics - The New York Times - 1 views

  • it is a reflection of a broader shift in the study of economics, in which the most cutting-edge work increasingly relies less on a big-brained individual scholar developing mathematical theories, and more on the ability to crunch extensive sets of data to glean insights about topics as varied as how incomes differ across society and how industries organize themselves.
  • “Who wouldn’t want to be where the future of the world is being made?” said Tyler Cowen, an economist at George Mason University (and regular contributor to The New York Times) who often blogs about trends in academic economics. Stanford’s economics department, he said, “has an excitement about it which Boston and Cambridge can’t touch.”
  • In economics, Stanford has frequently been ranked just behind Harvard, M.I.T., Princeton and the University of Chicago, including in the most recent U.S. News & World Report survey
  • In the last four years, Stanford has increased the number of senior faculty by 25 percent, and 11 scholars with millions in cumulative salary have either been recruited from other top programs or resisted poaching attempts by those programs.
  • The specialties of the new recruits vary, but they are all examples of how the momentum in economics has shifted away from theoretical modeling and toward “empirical microeconomics,” the analysis of how things work in the real world, often arranging complex experiments or exploiting large sets of data. That kind of work requires lots of research assistants, work across disciplines including fields like sociology and computer science, and the use of advanced computational techniques unavailable a generation ago.
  • the scholars who have newly signed on with Stanford described a university particularly well suited to research in that vein, with a combination of lab space, strong budgets for research support and proximity to engineering talent.
  • The Chicago School, under the intellectual imprint of Milton Friedman, was a leader in neoclassical thought that emphasizes the efficiency of markets and the risks of government intervention. M.I.T.’s economics department has a long record of economic thought in the Keynesian tradition, and it produced several of the top policy makers who have guided the world economy through the tumultuous last several years.
  • “There isn’t a Stanford school of thought,” said B. Douglas Bernheim, chairman of the university’s economics department. “This isn’t a doctrinaire place. Generally doctrine involves simplification, and increasingly we recognize that these social issues we’re trying to figure out are phenomenally complicated. The consensus at Stanford has focused around the idea that you have to be open to a lot of approaches and ways of thinking about things, and to be rigorous, thorough and careful in bringing the highest standard of craft to bear on your research.”
  • “My sense is this is a good development for economics,” Mr. Chetty said. “I think Stanford is going to be another great department at the level of Harvard and M.I.T. doing this type of work, which is an example of economics becoming a deeper field. It’s a great thing for all the universities — I don’t think it’s a zero-sum game.”

Science in the Age of Alternative Facts | Big Think - 0 views

  • discovered a peculiar aspect of human psychology and physiology: the placebo effect. As biographer Richard Holmes writes regarding their increased health, “It was simply because the patients believed they would be cured.”
  • Most importantly he did not finagle results to fit his preconceived notion of what this and other gases accomplish.
  • For science to work we need to move out of the way of ourselves and observe the data. Right now too many emotionally stunted and corporate-backed obstacles stand in the way of that.
  •  
    Alternative facts are spooky things that blur the line between what we think is happening and what is really taking place. In this article, the author uses the example of Davy to argue that treating data objectively is what science should be doing. Data are tricky in science because we can draw different conclusions from the same set of data. Just like the line-drawing game we played in TOK, there are infinitely many lines we can draw to connect all the data points, but only one of them is true. As the author shows, the best way to avoid creating alternative facts is to set aside our emotions and personal opinions and let the data speak. Intuition and imagination are good for science, but most of the time we need to remind ourselves not to force the data.

Why It's OK to Let Apps Make You a Better Person - Evan Selinger - Technology - The Atl... - 0 views

  • one theme emerges from the media coverage of people's relationships with our current set of technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification--nudging, the quantified self, and gamification--and good old-fashioned financial incentive manipulation, are tackling weakness of will. They're harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.
  • the trend still has multiple interesting dimensions
  • Individuals are turning ever more aspects of their lives into managerial problems that require technological solutions. We have access to an ever-increasing array of free and inexpensive technologies that harness incredible computational power that effectively allows us to self-police behavior everywhere we go. As pervasiveness expands, so does trust.
  • Some embrace networked, data-driven lives and are comfortable volunteering embarrassing, real time information about what we're doing, whom we're doing it with, and how we feel about our monitored activities.
  • Put it all together and we can see that our conception of what it means to be human has become "design space." We're now Humanity 2.0, primed for optimization through commercial upgrades. And today's apps are more harbinger than endpoint.
  • philosophers have had much to say about the enticing and seemingly inevitable dispersion of technological mental prosthetics that promise to substitute or enhance some of our motivational powers.
  • beyond the practical issues lie a constellation of central ethical concerns.
  • they should cause us to pause as we think about a possible future that significantly increases the scale and effectiveness of willpower-enhancing apps. Let's call this hypothetical future Digital Willpower World and characterize the ethical traps we're about to discuss as potential general pitfalls
  • it is antithetical to the ideal of " resolute choice." Some may find the norm overly perfectionist, Spartan, or puritanical. However, it is not uncommon for folks to defend the idea that mature adults should strive to develop internal willpower strong enough to avoid external temptations, whatever they are, and wherever they are encountered.
  • In part, resolute choosing is prized out of concern for consistency, as some worry that lapse of willpower in any context indicates a generally weak character.
  • Fragmented selves behave one way while under the influence of digital willpower, but another when making decisions without such assistance. In these instances, inconsistent preferences are exhibited and we risk underestimating the extent of our technological dependency.
  • It simply means that when it comes to digital willpower, we should be on our guard to avoid confusing situational with integrated behaviors.
  • the problem of inauthenticity, a staple of the neuroethics debates, might arise. People might start asking themselves: Has the problem of fragmentation gone away only because devices are choreographing our behavior so powerfully that we are no longer in touch with our so-called real selves -- the selves who used to exist before Digital Willpower World was formed?
  • Infantilized subjects are morally lazy, quick to have others take responsibility for their welfare. They do not view the capacity to assume personal responsibility for selecting means and ends as a fundamental life goal that validates the effort required to remain committed to the ongoing project of maintaining willpower and self-control.
  • Michael Sandel's Atlantic essay, "The Case Against Perfection." He notes that technological enhancement can diminish people's sense of achievement when their accomplishments become attributable to human-technology systems and not an individual's use of human agency.
  • Borgmann worries that this environment, which habituates us to be on auto-pilot and delegate deliberation, threatens to harm the powers of reason, the most central component of willpower (according to the rationalist tradition).
  • In several books, including Technology and the Character of Contemporary Life, he expresses concern about technologies that seem to enhance willpower but only do so through distraction. Borgmann's paradigmatic example of the non-distracted, focally centered person is a serious runner. This person finds the practice of running maximally fulfilling, replete with the rewarding "flow" that can only come when mind/body and means/ends are unified, while skill gets pushed to the limit.
  • Perhaps the very conception of a resolute self was flawed. What if, as psychologist Roy Baumeister suggests, willpower is more "staple of folk psychology" than real way of thinking about our brain processes?
  • novel approaches suggest the will is a flexible mesh of different capacities and cognitive mechanisms that can expand and contract, depending on the agent's particular setting and needs. Contrary to the traditional view that identifies the unified and cognitively transparent self as the source of willed actions, the new picture embraces a rather diffused, extended, and opaque self who is often guided by irrational trains of thought. What actually keeps the self and its will together are the given boundaries offered by biology, a coherent self narrative created by shared memories and experiences, and society. If this view of the will as an expanding and contracting system with porous and dynamic boundaries is correct, then it might seem that the new motivating technologies and devices can only increase our reach and further empower our willing selves.
  • "It's a mistake to think of the will as some interior faculty that belongs to an individual--the thing that pushes the motor control processes that cause my action," Gallagher says. "Rather, the will is both embodied and embedded: social and physical environment enhance or impoverish our ability to decide and carry out our intentions; often our intentions themselves are shaped by social and physical aspects of the environment."
  • It makes perfect sense to think of the will as something that can be supported or assisted by technology. Technologies, like environments and institutions can facilitate action or block it. Imagine I have the inclination to go to a concert. If I can get my ticket by pressing some buttons on my iPhone, I find myself going to the concert. If I have to fill out an application form and carry it to a location several miles away and wait in line to pick up my ticket, then forget it.
  • Perhaps the best way forward is to put a digital spin on the Socratic dictum of knowing myself and submit to the new freedom: the freedom of consuming digital willpower to guide me past the sirens.

Atul Gawande: Failure and Rescue : The New Yorker - 0 views

  • the critical skills of the best surgeons I saw involved the ability to handle complexity and uncertainty. They had developed judgment, mastery of teamwork, and willingness to accept responsibility for the consequences of their choices. In this respect, I realized, surgery turns out to be no different than a life in teaching, public service, business, or almost anything you may decide to pursue. We all face complexity and uncertainty no matter where our path takes us. That means we all face the risk of failure. So along the way, we all are forced to develop these critical capacities—of judgment, teamwork, and acceptance of responsibility.
  • people admonish us: take risks; be willing to fail. But this has always puzzled me. Do you want a surgeon whose motto is “I like taking risks”? We do in fact want people to take risks, to strive for difficult goals even when the possibility of failure looms. Progress cannot happen otherwise. But how they do it is what seems to matter. The key to reducing death after surgery was the introduction of ways to reduce the risk of things going wrong—through specialization, better planning, and technology.
  • there continue to be huge differences between hospitals in the outcomes of their care. Some places still have far higher death rates than others. And an interesting line of research has opened up asking why.
  • I thought that the best places simply did a better job at controlling and minimizing risks—that they did a better job of preventing things from going wrong. But, to my surprise, they didn’t. Their complication rates after surgery were almost the same as others. Instead, what they proved to be really great at was rescuing people when they had a complication, preventing failures from becoming a catastrophe.
  • this is what distinguished the great from the mediocre. They didn’t fail less. They rescued more.
  • This may in fact be the real story of human and societal improvement. We talk a lot about “risk management”—a nice hygienic phrase. But in the end, risk is necessary. Things can and will go wrong. Yet some have a better capacity to prepare for the possibility, to limit the damage, and to sometimes even retrieve success from failure.
  • When things go wrong, there seem to be three main pitfalls to avoid, three ways to fail to rescue. You could choose a wrong plan, an inadequate plan, or no plan at all. Say you’re cooking and you inadvertently set a grease pan on fire. Throwing gasoline on the fire would be a completely wrong plan. Trying to blow the fire out would be inadequate. And ignoring it—“Fire? What fire?”—would be no plan at all.
  • All policies court failure—our war in Iraq, for instance, or the effort to stimulate our struggling economy. But when you refuse to even acknowledge that things aren’t going as expected, failure can become a humanitarian disaster. The sooner you’re able to see clearly that your best hopes and intentions have gone awry, the better. You have more room to pivot and adjust. You have more of a chance to rescue.
  • But recognizing that your expectations are proving wrong—accepting that you need a new plan—is commonly the hardest thing to do. We have this problem called confidence. To take a risk, you must have confidence in yourself
  • Yet you cannot blind yourself to failure, either. Indeed, you must prepare for it. For, strangely enough, only then is success possible.
  • So you will take risks, and you will have failures. But it’s what happens afterward that is defining. A failure often does not have to be a failure at all. However, you have to be ready for it—will you admit when things go wrong? Will you take steps to set them right?—because the difference between triumph and defeat, you’ll find, isn’t about willingness to take risks. It’s about mastery of rescue.

Six Vintage-Inspired Animations on Critical Thinking | Brain Pickings - 0 views

  •  
    Australian outfit Bridge 8, who have the admirable mission of devising "creative strategies for science and society," have put together six fantastic two-minute animations on various aspects of critical thinking, aimed at kids ages 8 to 10 but also designed to resonate with grown-ups. Inspired by the animation style of the 1950s, most recognizably Saul Bass, the films are designed to promote a set of educational resources on critical thinking by TechNYou, an emerging technologies public information project funded by the Australian government.

Other People's Suffering - NYTimes.com - 0 views

  • members of the upper class are more likely than others to behave unethically, to lie during negotiations, to drive illegally and to cheat when competing for a prize. “Greed is a robust determinant of unethical behavior,” the authors conclude. “Relative to lower-class individuals, individuals from upper-class backgrounds behaved more unethically in both naturalistic and laboratory settings.”
  • Our findings suggest that when a person is suffering, upper-class individuals perceive these signals less well on average, consistent with other findings documenting reduced empathic accuracy in upper-class individuals (Kraus et al., 2010). Taken together, these findings suggest that upper-class individuals may underestimate the distress and suffering in their social environments.
  • each participant was assigned to listen, face to face, from two feet away, to someone else describing real personal experiences of suffering and distress. The listeners’ responses were measured two ways, first by self-reported levels of compassion and second by electrocardiogram readings to determine the intensity of their emotional response. The participants all took a test known as the “sense of power” scale, ranking themselves on such personal strengths and weaknesses as ‘‘I can get people to listen to what I say’’ and ‘‘I can get others to do what I want,” as well as ‘‘My wishes do not carry much weight’’ and ‘‘Even if I voice them, my views have little sway,’’ which are reverse scored. The findings were noteworthy, to say the least. For “low-power” listeners, compassion levels shot up as the person describing suffering became more distressed. Exactly the opposite happened for “high-power” listeners: their compassion dropped as distress rose.
  • Who fits the stereotype of the rich and powerful described in this research? Mitt Romney. Empathy: “I’m not concerned about the very poor.” Compassion: “I like being able to fire people who provide services to me.” Sympathy for the disadvantaged: My wife “drives a couple of Cadillacs.” Willingness to lie in negotiations: “I was a severely conservative Republican governor.”
  • 48 percent described the Democratic Party as “weak,” compared to 28 percent who described the Republican Party that way. Conversely, 50 percent said the Republican Party is “cold hearted,” compared to 30 percent who said that was true of the Democrats.
  • This is the war that is raging throughout America. It is between conservatives, who emphasize personal responsibility and achievement, against liberals, who say the government must take from the wealthy and give to the poor. So it will be interesting this week to see if President Obama can rally the country to support his vision of a strong social compact. He has compassion on his side. Few Americans want to see their fellow citizens suffer. But the president does have that fiscal responsibility issue haunting him because the country remains in dire trouble.
  • For power holders, the world is viewed through an instrumental lens, and approach is directed toward those individuals who populate the useful parts of the landscape. Our results suggest that power not only channels its possessor’s energy toward goal completion but also targets and attempts to harness the energy of useful others. Thus, power appears to be a great facilitator of goal pursuit through a combination of intrapersonal and interpersonal processes. The nature of the power holder’s goals and interpersonal relationships ultimately determine how power is harnessed and what is accomplished in the end.
  • Republicans recognize the political usefulness of objectification, capitalizing on “compassion fatigue,” or the exhaustion of empathy, among large swathes of the electorate who are already stressed by the economic collapse of 2008, high levels of unemployment, an epidemic of foreclosures, stagnant wages and a hyper-competitive business arena.
  • Republican debates provided further evidence of compassion fatigue when audiences cheered the record-setting use of the death penalty in Texas and applauded the prospect of a gravely ill pauper who, unable to pay medical fees, was allowed to die. Even Rick Santorum, who has been described by the National Review as holding “unstinting devotion to human dignity” and as fluent in “the struggles of the working class,” wants to slash aid to the poor. At a Feb. 21 gathering of 500 voters in Maricopa County, Ariz., Santorum brought the audience to its feet as he declared: We need to take everything from food stamps to Medicaid to the housing programs to education and training programs, we need to cut them, cap them, freeze them, send them to the states, say that there has to be a time limit and a work requirement, and be able to give them the flexibility to do those programs here at the state level.
  • President Obama has a substantial advantage this year because he does not have a primary challenger, which frees him from the need to emphasize his advocacy for the disempowered — increasing benefits or raising wages for the poor. This allows him to pick and chose the issues he wants to address.At the same time, compassion fatigue may make it easier for the Republican nominee to overcome the liabilities stemming from his own primary rhetoric, to reach beyond the core of the party to white centrist voters less openly drawn to hard-edged conservatism. With their capacity for empathy frayed by a pervasive sense of diminishing opportunity and encroaching shortfall, will these voters once again become dependable Republicans in 2012?
  •  
    Do you agree with Edsall? I think he is definitely taking an anti-Republican stance, but the findings are interesting.
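The “sense of power” scale described in this entry is a self-report instrument in which some items are reverse scored before averaging. Below is a minimal scoring sketch in Python: the item wordings come from the excerpt, but the 1–7 response range and the example answers are assumptions for illustration only, not the study’s actual data.

```python
# Scoring a self-report scale that mixes normal and reverse-scored items.
# Assumed: answers fall on a 1-7 agreement scale (an illustration, not the study's format).
RESPONSES = {
    "I can get people to listen to what I say": 6,
    "I can get others to do what I want": 5,
    "My wishes do not carry much weight": 2,               # reverse scored
    "Even if I voice them, my views have little sway": 3,  # reverse scored
}
REVERSE_ITEMS = {
    "My wishes do not carry much weight",
    "Even if I voice them, my views have little sway",
}
SCALE_MIN, SCALE_MAX = 1, 7

def sense_of_power(responses: dict) -> float:
    """Average the items after flipping the reverse-scored ones."""
    total = 0
    for item, value in responses.items():
        if item in REVERSE_ITEMS:
            value = SCALE_MIN + SCALE_MAX - value  # e.g. a 2 becomes a 6 on a 1-7 scale
        total += value
    return total / len(responses)

print(sense_of_power(RESPONSES))  # 5.5 for the example answers above
```

Higher averages put a listener toward the “high-power” end of the split the study uses to compare compassion responses.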
12More

The Rediscovery of Character - NYTimes.com - 0 views

  • broken windows was only a small piece of what Wilson contributed, and he did not consider it the center of his work. The best way to understand the core Wilson is by borrowing the title of one of his essays: “The Rediscovery of Character.”
  • When Wilson began looking at social policy, at the University of Redlands, the University of Chicago and Harvard, most people did not pay much attention to character. The Marxists looked at material forces. Darwinians at the time treated people as isolated products of competition. Policy makers of right and left thought about how to rearrange economic incentives. “It is as if it were a mark of sophistication for us to shun the language of morality in discussing the problems of mankind,” he once recalled.
  • during the 1960s and ’70s, he noticed that the nation’s problems could not be understood by looking at incentives
  • ...9 more annotations...
  • “At root,” Wilson wrote in 1985 in The Public Interest, “in almost every area of important concern, we are seeking to induce persons to act virtuously, whether as schoolchildren, applicants for public assistance, would-be lawbreakers or voters and public officials.”
  • When Wilson wrote about character and virtue, he didn’t mean anything high flown or theocratic. It was just the basics, befitting a man who grew up in the middle-class suburbs of Los Angeles in the 1940s: Behave in a balanced way. Think about the long-term consequences of your actions. Cooperate. Be decent.
  • Wilson argued that American communities responded to the stresses of industrialization by fortifying self-control.
  • It was habituated by practicing good manners, by being dependable, punctual and responsible day by day.
  • Wilson set out to learn how groups created a good order, why that order sometimes frayed.
  • In “The Moral Sense,” he brilliantly investigated the virtuous sentiments we are born with and how they are cultivated by habit. Wilson’s broken windows theory was promoted in an essay with George Kelling called “Character and Community.” Wilson and Kelling didn’t think of crime primarily as an individual choice. They saw it as something that emerged from the social psychology of a community. When neighborhoods feel disorganized and scary, crime increases.
  • he emphasized that character was formed in groups. As he wrote in “The Moral Sense,” his 1993 masterpiece, “Order exists because a system of beliefs and sentiments held by members of a society sets limits to what those members can do.”
  • But America responded to the stresses of the information economy by reducing the communal buttresses to self-control, with unfortunate results.
  • Wilson was not a philosopher. He was a social scientist. He just understood that people are moral judgers and moral actors, and he reintegrated the vocabulary of character into discussions of everyday life.
37More

The American Scholar: The Disadvantages of an Elite Education - William Deresiewicz - 1 views

  • the last thing an elite education will teach you is its own inadequacy
  • I’m talking about the whole system in which these skirmishes play out. Not just the Ivy League and its peer institutions, but also the mechanisms that get you there in the first place: the private and affluent public “feeder” schools, the ever-growing parastructure of tutors and test-prep courses and enrichment programs, the whole admissions frenzy and everything that leads up to and away from it. The message, as always, is the medium. Before, after, and around the elite college classroom, a constellation of values is ceaselessly inculcated.
  • The first disadvantage of an elite education, as I learned in my kitchen that day, is that it makes you incapable of talking to people who aren’t like you. Elite schools pride themselves on their diversity, but that diversity is almost entirely a matter of ethnicity and race. With respect to class, these schools are largely—indeed increasingly—homogeneous. Visit any elite campus in our great nation and you can thrill to the heartwarming spectacle of the children of white businesspeople and professionals studying and playing alongside the children of black, Asian, and Latino businesspeople and professionals.
  • ...34 more annotations...
  • My education taught me to believe that people who didn’t go to an Ivy League or equivalent school weren’t worth talking to, regardless of their class. I was given the unmistakable message that such people were beneath me.
  • The existence of multiple forms of intelligence has become a commonplace, but however much elite universities like to sprinkle their incoming classes with a few actors or violinists, they select for and develop one form of intelligence: the analytic.
  • Students at places like Cleveland State, unlike those at places like Yale, don’t have a platoon of advisers and tutors and deans to write out excuses for late work, give them extra help when they need it, pick them up when they fall down.
  • When people say that students at elite schools have a strong sense of entitlement, they mean that those students think they deserve more than other people because their SAT scores are higher.
  • The political implications should be clear. As John Ruskin told an older elite, grabbing what you can get isn’t any less wicked when you grab it with the power of your brains than with the power of your fists.
  • students at places like Yale get an endless string of second chances. Not so at places like Cleveland State.
  • The second disadvantage, implicit in what I’ve been saying, is that an elite education inculcates a false sense of self-worth. Getting to an elite college, being at an elite college, and going on from an elite college—all involve numerical rankings: SAT, GPA, GRE. You learn to think of yourself in terms of those numbers. They come to signify not only your fate, but your identity; not only your identity, but your value.
  • For the elite, there’s always another extension—a bailout, a pardon, a stint in rehab—always plenty of contacts and special stipends—the country club, the conference, the year-end bonus, the dividend.
  • In short, the way students are treated in college trains them for the social position they will occupy once they get out. At schools like Cleveland State, they’re being trained for positions somewhere in the middle of the class system, in the depths of one bureaucracy or another. They’re being conditioned for lives with few second chances, no extensions, little support, narrow opportunity—lives of subordination, supervision, and control, lives of deadlines, not guidelines. At places like Yale, of course, it’s the reverse.
  • Elite schools nurture excellence, but they also nurture what a former Yale graduate student I know calls “entitled mediocrity.”
  • An elite education gives you the chance to be rich—which is, after all, what we’re talking about—but it takes away the chance not to be. Yet the opportunity not to be rich is one of the greatest opportunities with which young Americans have been blessed. We live in a society that is itself so wealthy that it can afford to provide a decent living to whole classes of people who in other countries exist (or in earlier times existed) on the brink of poverty or, at least, of indignity. You can live comfortably in the United States as a schoolteacher, or a community organizer, or a civil rights lawyer, or an artist
  • The liberal arts university is becoming the corporate university, its center of gravity shifting to technical fields where scholarly expertise can be parlayed into lucrative business opportunities.
  • You have to live in an ordinary house instead of an apartment in Manhattan or a mansion in L.A.; you have to drive a Honda instead of a BMW or a Hummer; you have to vacation in Florida instead of Barbados or Paris, but what are such losses when set against the opportunity to do work you believe in, work you’re suited for, work you love, every day of your life? Yet it is precisely that opportunity that an elite education takes away. How can I be a schoolteacher—wouldn’t that be a waste of my expensive education?
  • Isn’t it beneath me? So a whole universe of possibility closes, and you miss your true calling.
  • This is not to say that students from elite colleges never pursue a riskier or less lucrative course after graduation, but even when they do, they tend to give up more quickly than others.
  • But if you’re afraid to fail, you’re afraid to take risks, which begins to explain the final and most damning disadvantage of an elite education: that it is profoundly anti-intellectual.
  • being an intellectual is not the same as being smart. Being an intellectual means more than doing your homework.
  • The system forgot to teach them, along the way to the prestige admissions and the lucrative jobs, that the most important achievements can’t be measured by a letter or a number or a name. It forgot that the true purpose of education is to make minds, not careers.
  • Being an intellectual means, first of all, being passionate about ideas—and not just for the duration of a semester, for the sake of pleasing the teacher, or for getting a good grade.
  • Only a small minority have seen their education as part of a larger intellectual journey, have approached the work of the mind with a pilgrim soul. These few have tended to feel like freaks, not least because they get so little support from the university itself. Places like Yale, as one of them put it to me, are not conducive to searchers. Places like Yale are simply not set up to help students ask the big questions
  • Professors at top research institutions are valued exclusively for the quality of their scholarly work; time spent on teaching is time lost. If students want a conversion experience, they’re better off at a liberal arts college.
  • When elite universities boast that they teach their students how to think, they mean that they teach them the analytic and rhetorical skills necessary for success in law or medicine or science or business.
  • Although the notion of breadth is implicit in the very idea of a liberal arts education, the admissions process increasingly selects for kids who have already begun to think of themselves in specialized terms—the junior journalist, the budding astronomer, the language prodigy. We are slouching, even at elite schools, toward a glorified form of vocational training.
  • There’s a reason elite schools speak of training leaders, not thinkers—holders of power, not its critics. An independent mind is independent of all allegiances, and elite schools, which get a large percentage of their budget from alumni giving, are strongly invested in fostering institutional loyalty.
  • At a school like Yale, students who come to class and work hard expect nothing less than an A-. And most of the time, they get it.
  • Yet there is a dimension of the intellectual life that lies above the passion for ideas, though so thoroughly has our culture been sanitized of it that it is hardly surprising if it was beyond the reach of even my most alert students. Since the idea of the intellectual emerged in the 18th century, it has had, at its core, a commitment to social transformation. Being an intellectual means thinking your way toward a vision of the good society and then trying to realize that vision by speaking truth to power.
  • It takes more than just intellect; it takes imagination and courage.
  • Being an intellectual begins with thinking your way outside of your assumptions and the system that enforces them. But students who get into elite schools are precisely the ones who have best learned to work within the system, so it’s almost impossible for them to see outside it, to see that it’s even there.
  • Paradoxically, the situation may be better at second-tier schools and, in particular, again, at liberal arts colleges than at the most prestigious universities. Some students end up at second-tier schools because they’re exactly like students at Harvard or Yale, only less gifted or driven. But others end up there because they have a more independent spirit. They didn’t get straight A’s because they couldn’t be bothered to give everything in every class. They concentrated on the ones that meant the most to them or on a single strong extracurricular passion or on projects that had nothing to do with school
  • I’ve been struck, during my time at Yale, by how similar everyone looks. You hardly see any hippies or punks or art-school types, and at a college that was known in the ’80s as the Gay Ivy, few out lesbians and no gender queers. The geeks don’t look all that geeky; the fashionable kids go in for understated elegance. Thirty-two flavors, all of them vanilla.
  • The most elite schools have become places of a narrow and suffocating normalcy. Everyone feels pressure to maintain the kind of appearance—and affect—that go with achievement
  • Now that students are in constant electronic contact, they never have trouble finding each other. But it’s not as if their compulsive sociability is enabling them to develop deep friendships.
  • What happens when busyness and sociability leave no room for solitude? The ability to engage in introspection, I put it to my students that day, is the essential precondition for living an intellectual life, and the essential precondition for introspection is solitude
  • the life of the mind is lived one mind at a time: one solitary, skeptical, resistant mind at a time. The best place to cultivate it is not within an educational system whose real purpose is to reproduce the class system.
3More

Google Brings New Meaning to the Web - IEEE Spectrum - 0 views

  • Google’s new push to make sense of the Web in terms of “things, not strings,” to use the company’s catchphrase. Instead of just indexing Web documents by the words they contain, “we really need to understand about things in the real world,” says Shashi Thakur, technical lead on the Knowledge Graph project, which some see as a stepping stone to a long-sought system called the Semantic Web.
  • Google’s Knowledge Graph adds a new dimension to searches, because the company now keeps track of what many search terms mean. That’s what allows the system to recognize the connection between Margaret Thatcher (the person) and Grantham (her place of birth)—not because the two strings show up together on a lot of Web pages.
  • Freebase was just one building block Google used to create Knowledge Graph. “We’re definitely open to using any data we have privileges to use,” says Thakur, who gives the CIA World Factbook and Wikipedia as examples of other collections of public knowledge that he and his colleagues tapped. He says that Google has also licensed certain data sets and points out that the company has some large data sets developed internally—the information collected for Google Maps, for example. “All of those go into the Knowledge Graph,” says Thakur. (The entity-relation idea is sketched just below.)
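The “things, not strings” idea in these excerpts is easiest to see as data: facts are stored as subject–relation–object triples about entities, so a query follows links (Margaret Thatcher → born_in → Grantham) rather than matching keywords. Here is a minimal, purely illustrative sketch in Python; the relation names and the tiny in-memory list are assumptions, not Google’s actual Knowledge Graph schema or API.

```python
# A toy triple store: each fact links two "things" by a named relation.
TRIPLES = [
    ("Margaret Thatcher", "instance_of", "person"),
    ("Margaret Thatcher", "born_in", "Grantham"),
    ("Grantham", "instance_of", "town"),
]

def related(entity: str, relation: str) -> list:
    """Return every object linked to `entity` by `relation`."""
    return [obj for subj, rel, obj in TRIPLES if subj == entity and rel == relation]

print(related("Margaret Thatcher", "born_in"))  # ['Grantham']
print(related("Grantham", "instance_of"))       # ['town']
```

The point of the structure is that “Grantham” is an entity with its own facts attached, not merely a string that happens to co-occur with “Thatcher” on many Web pages.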
11More

How Life Began: New Clues | TIME.com - 0 views

  • Astronomers recently announced that there could be an astonishing 20 billion Earthlike planets in the Milky Way
  • How abundant life actually is, however, hinges on one crucial factor: given the right conditions and the right raw materials,
  • what is the mathematical likelihood that life would actually arise?
  • ...8 more annotations...
  • biology would have to be popping up all over the place.
  • Andrew Ellington, of the Center for Systems and Synthetic Biology at the University of Texas, Austin, says, “I can’t tell you what the probability is. It’s a chapter of the story that’s pretty much blank.”
  • Given that rather bleak-sounding assessment, it may be surprising to learn that Ellington is actually pretty upbeat. But that’s how he and two colleagues come across in a paper in the latest Science. The crucial step from nonliving stuff to a live cell is still a mystery, they acknowledge, but the number of pathways a mix of inanimate chemicals could have taken to reach the threshold of the living turns out to be many and varied. “It’s difficult to say exactly how things did occur,” says Ellington. “But there are many ways it could have occurred.
  • The first stab at answering the question came all the way back in the 1950s, when chemists Stanley Miller and Harold Urey passed an electrical spark through a beaker containing methane, ammonia, water vapor and hydrogen, thought at the time to represent Earth’s primordial atmosphere.
  • Scientists have learned so much, in fact, that the number of places life might have begun has grown to include such disparate locations as the hydrothermal vents at the bottom of the ocean; beds of clay; the billowing clouds of gas emerging from volcanoes; and the spaces in between ice crystals.
  • The number of ideas about how the key step from organic chemicals to living organisms might have been taken has multiplied as well: there’s the “RNA world hypothesis” and the “lipid world hypothesis” and the “iron-sulfur world hypothesis” and more, all of them dependent on a particular set of chemical circumstances and a particular set of dynamics and all highly speculative.
  • “Maybe when they do,” says Ellington, “we’ll all do a face-plant because it turns out to be so obvious in retrospect.” But even if they succeed, it will only prove that a manufactured cell could represent the earliest life forms, not that it actually does. “It will be a story about what we think might have happened, but it will still be a story.”
  • The story Ellington and his colleagues have been able to tell already, however, is a reason for optimism. We still don’t know the odds that life will arise under the right conditions. But the underlying biochemistry is abundantly, ubiquitously available—and it would take an awfully perverse universe to take things so far only to shut them down at the last moment.
41More

George Packer: Is Amazon Bad for Books? : The New Yorker - 0 views

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • ...38 more annotations...
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent.
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year. (A back-of-the-envelope royalty comparison follows this entry.)
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue. While the company was building the
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of it dreck, while r
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble*. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
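The royalty figures quoted in this entry (up to seventy per cent on a self-published e-book versus roughly fifteen per cent on a traditionally published hardcover) invite a quick per-copy comparison. Here is a back-of-the-envelope sketch in Python; the royalty rates come from the article, but the list prices and sales volumes are hypothetical, chosen only to show how low e-book prices interact with the higher royalty share.

```python
def author_earnings(list_price: float, royalty_rate: float, copies_sold: int) -> float:
    """Gross author earnings: price x royalty share x copies (ignores agent fees, returns, etc.)."""
    return list_price * royalty_rate * copies_sold

# Hypothetical numbers for illustration only.
self_published = author_earnings(list_price=2.99, royalty_rate=0.70, copies_sold=1_000)
hardcover = author_earnings(list_price=26.00, royalty_rate=0.15, copies_sold=1_000)

print(f"Self-published e-book: ${self_published:,.0f}")  # ~$2,093
print(f"Traditional hardcover: ${hardcover:,.0f}")       # ~$3,900
```

At these assumed prices the higher royalty share does not make up for the lower cover price, which fits the survey finding that half of self-published authors earn under five hundred dollars a year.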
6More

Interview: Ted Chiang | The Asian American Literary Review - 0 views

  • I think most people’s ideas of science fiction are formed by Hollywood movies, so they think most science fiction is a special effects-driven story revolving around a battle between good and evil
  • I don’t think of that as a science fiction story. You can tell a good-versus-evil story in any time period and in any setting. Setting it in the future and adding robots to it doesn’t make it a science fiction story.
  • I think science fiction is fundamentally a post-industrial revolution form of storytelling. Some literary critics have noted that the good-versus-evil story follows a pattern where the world starts out as a good place, evil intrudes, the heroes fight and eventually defeat evil, and the world goes back to being a good place. Those critics have said that this is fundamentally a conservative storyline because it’s about maintaining the status quo. This is a common story pattern in crime fiction, too—there’s some disruption to the order, but eventually order is restored. Science fiction offers a different kind of story, a story where the world starts out as recognizable and familiar but is disrupted or changed by some new discovery or technology. At the end of the story, the world is changed permanently. The original condition is never restored. And so in this sense, this story pattern is progressive because its underlying message is not that you should maintain the status quo, but that change is inevitable. The consequences of this new discovery or technology—whether they’re positive or negative—are here to stay and we’ll have to deal with them.
  • ...3 more annotations...
  • There’s also a subset of this progressive story pattern that I’m particularly interested in, and that’s the “conceptual breakthrough” story, where the characters discover something about the nature of the universe which radically expands their understanding of the world.  This is a classic science fiction storyline.
  • one of the cool things about science fiction is that it lets you dramatize the process of scientific discovery, that moment of suddenly understanding something about the universe. That is what scientists find appealing about science, and I enjoy seeing the same thing in science fiction.
  • when you mention myth or mythic structure, yes, I don’t think myths can do that, because in general, myths reflect a pre-industrial view of the world. I don’t know if there is room in mythology for a strong conception of the future, other than an end-of-the-world or Armageddon scenario …