
TOK Friends: Group items tagged skepticism


Emily Horwitz

Pigeon Code Baffles British Cryptographers - NYTimes.com - 0 views

  • They have eavesdropped on the enemy for decades, tracking messages from Hitler’s high command and the Soviet K.G.B. and on to the murky, modern world of satellites and cyberspace. But a lowly and yet mysterious carrier pigeon may have them baffled.
  • Pigeon specialists said they believed it may have been flying home from British units in France around the time of the D-Day landing in 1944 when it somehow expired in the chimney at the 17th-century home where it was found in the village of Bletchingley, south of London.
  • “Unless we get rather more idea than we have about who sent this message and who it was sent to, we are not going to be able to find out what the underlying code was,” said the historian, who was identified only as Tony under the organization’s secrecy protocols.
  • ...3 more annotations...
  • Britain’s code-breaking and communications interception unit in Gloucestershire agreed to try to crack the code. But on Friday the secretive organization acknowledged that it had been unable to do so.
  • “Without access to the relevant code books and details of any additional encryption used, it will remain impossible to decrypt,” the Government Communications Headquarters said in a news release.
  • Mr. Martin said he was skeptical of the idea that the agency had been unable to crack the code. “I think there’s something about that message that is either sensitive or does not reflect well” on British special forces operating behind enemy lines in wartime France, he said in a telephone interview. “I’m convinced that it’s an important message and a secret message.”
Emily Horwitz

Scientists to Seek Clues to Violence in Genome of Gunman in Newtown, Conn. - NYTimes.com - 0 views

  • In a move likely to renew a longstanding ethical controversy, geneticists are quietly making plans to study the DNA of Adam Lanza, 20, who killed 20 children and seven adults in Newtown, Conn. Their work will be an effort to discover biological clues to extreme violence.
  • Other experts speculated that the geneticists might look for mutations associated with mental illnesses, as well as ones that might also increase the risk for violence.
  • But whatever they do, this apparently is the first time researchers will attempt a detailed study of the DNA of a mass killer.
  • ...6 more annotations...
  • Dr. Arthur Beaudet, a professor at the Baylor College of Medicine and the chairman of its department of molecular and human genetics, applauds the effort. He believes that the acts committed by men like Mr. Lanza and the gunmen in other rampages in recent years — at Columbine High School and in Aurora, Colo., in Norway, in Tucson and at Virginia Tech — are so far off the charts of normal behavior that there must be genetic changes driving them.
  • Everything known about mental illness, these skeptics say, argues that there are likely to be hundreds of genes involved in extreme violent behavior, not to mention a variety of environmental influences, and that all of these factors can interact in complex and unpredictable ways.
  • The National Institutes of Health was embroiled in controversy about 20 years ago simply for proposing to study the biological underpinnings of violence. Critics accused researchers of racism and singling out minorities, especially black men.
  • Studies of people at the far end of a bell curve can be especially informative, because the genetic roots of their conditions can be stark and easy to spot, noted J. H. Pate Skene, a Duke University neurobiologist. “I think doing research on outliers, people at an end of a spectrum on something of concern like violent behavior, is certainly a good idea,” he said, but he advised tempering expectations.
  • “If we know someone has a 2 percent chance or a 10 percent chance or a 20 percent chance of violent behavior, what would you do with that person?” Dr. Skene said. “They have not been convicted of anything — have not done anything wrong.”
  • Ultimately, understanding the genetics of violence might enable researchers to find ways to intervene before a person commits a horrific crime. But that goal would be difficult to achieve, and the pursuit of it risks jeopardizing personal liberties.
Grace Carey

News at Tipitaka Network - 0 views

  •  
    Finding some interesting and very TOK-relevant articles while I'm working on my religious investigation into the science behind Buddhist beliefs. I found this one particularly intriguing, as it discusses why the theory of reincarnation is scientifically sound and why scientists are often narrow-minded and overly trusted. "I was once told by a Buddhist G.P. that, on his first day at a medical school in Sydney, the famous professor who headed the Medical School began his welcoming address by stating, "Half of what we are going to teach you in the next few years is wrong. Our problem is that we do not know which half it is!" Those were the words of a real scientist." "Logic is only as reliable as the assumptions on which it is based." "Objective experience is that which is free from all bias. In Buddhism, the three types of bias are desire, ill-will and skeptical doubt. Desire makes one see only what one wants to see; it bends the truth to fit one's preferences." "Reality, according to pure science, does not consist of well-ordered matter with precise masses, energies and positions in space, all just waiting to be measured. Reality is the broadest of smudges of all possibilities, only some being more probable than others." "At a recent seminar on Science and Religion, at which I was a speaker, a Catholic in the audience bravely announced that whenever she looks through a telescope at the stars, she feels uncomfortable because her religion is threatened. I commented that whenever a scientist looks the other way round through a telescope, to observe the one who is watching, they feel uncomfortable because their science is threatened by what is doing the seeing!"
Javier E

Florence and the Drones - NYTimes.com - 1 views

  • The conventional view is that Machiavelli believed that since people are brutes, everything is permitted. Leaders should do anything they can to hold power. The ends justify the means.
  • In fact, Machiavelli was a moralistic thinker.
  • He just had a different concept of political virtue. It would be nice, he writes, if a political leader could practice the Christian virtues like charity, mercy and gentleness and still provide for his people. But, in the real world, that’s usually not possible. In the real world, a great leader is called upon to create a civilized order for the city he serves. To create that order, to defeat the forces of anarchy and savagery, the virtuous leader is compelled to do hard things, to take, as it were, the sins of the situation upon himself.
  • ...8 more annotations...
  • The leader who does good things cannot always be good himself. Sometimes bad acts produce good outcomes. Sometimes a leader has to love his country more than his soul.
  • Since a leader is forced by circumstances to do morally suspect things, Machiavelli at least wants him to do them effectively.
  • When you read Machiavelli, you realize how lucky we are. Unlike 16th-century Florence, we have a good Constitution that channels conflict. We have manners, respect for law and social trust that softens behavior, at least a bit. Even in the realm of foreign affairs, we’ve inherited an international order that restrains conflict. Our ancestors behaved savagely to build our world, so we don’t have to.
  • But it’s still not possible to rule with perfectly clean hands. There are still terrorists out there, hiding in the shadows and plotting to kill Americans. So even today’s leaders face the Machiavellian choice: Do I have to be brutal to protect the people I serve? Do I have to use drones, which sometimes kill innocent children, in order to thwart terror and save the lives of my own?
  • When Barack Obama was a senator, he wasn’t compelled to confront the brutal logic of leadership. Now in office, he’s thrown into the Machiavellian world. He’s decided, correctly, that we are in a long war against Al Qaeda; that drone strikes do effectively kill terrorists; that, in fact, they inflict fewer civilian deaths than bombing campaigns, boots on the ground or any practical alternative; that, in fact, civilian death rates are dropping sharply as the C.I.A. gets better at this. Acting brutally abroad saves lives at home.
  • Machiavelli tells us that men are venal self-deceivers, but then he gives his Prince permission to do all these monstrous things, trusting him not to get carried away or turn into a monster himself.
  • Our founders were more careful. Our founders understood that leaders are as venal and untrustworthy as anybody else. They abhorred concentrated power, and they set up checks and balances to disperse it.
  • If you take Machiavelli’s tough-minded view of human nature, you have to be brutal to your enemies — but you also have to set up skeptical checks on the people you empower to destroy them.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation; you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you've got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working, and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating-unanalyzed-data kind is sort of a new approach -- not totally, there have been things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories and very rapid processing, which enables you to do things you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... Katz: ...in engineering? Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage there are going to be a thousand other variables intervening -- kind of like what's happening outside the window -- and you'll sort of tack those on later if you want better approximations. That's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things," and, say, by nine months the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works; I don't think it's the way any biologist thinks it is. There are all kinds of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology of organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there; they work all the way through. Katz: Well, they constrain the biology, sure. Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in, say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
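Chomsky's weather example above contrasts statistical prediction with mechanistic understanding. As a hedged illustration of the "statistical priors" side of that contrast only, here is a minimal beta-binomial sketch in Python; the prior and the observation history are invented for illustration and are not from the interview:

```python
# Toy beta-binomial model: predict "rain tomorrow" from past observations.
# alpha and beta encode the prior (here: a weak prior that rain and
# no-rain are equally likely); each observation updates the counts.
alpha, beta = 1.0, 1.0

# Hypothetical observation history: 1 = rain, 0 = no rain.
history = [1, 0, 1, 1, 0, 1, 1, 1]

for day in history:
    alpha += day        # a rainy day strengthens belief in rain
    beta += 1 - day     # a dry day strengthens belief in no rain

p_rain = alpha / (alpha + beta)
print(f"P(rain tomorrow) ~ {p_rain:.2f}")  # -> 0.70
```

Updating counts like this yields better and better approximations to the rain frequency while saying nothing about why it rains, which is the distinction between the two notions of success that Chomsky is drawing.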
demetriar

Are We Really Conscious? - NYTimes.com - 1 views

  • Third, what is the relationship between our minds and the physical world? Here, we don’t have a settled answer.
  • I believe a major change in our perspective on consciousness may be necessary, a shift from a credulous and egocentric viewpoint to a skeptical and slightly disconcerting one: namely, that we don’t actually have inner feelings in the way most of us think we do.
  • The brain builds models (or complex bundles of information) about items in the world, and those models are often not accurate.
  • ...1 more annotation...
  • In the attention schema theory, attention is the physical phenomenon and awareness is the brain’s approximate, slightly incorrect model of it.
Javier E

On Climate, Republicans and Democrats Are From Different Continents - NYTimes.com - 3 views

  • Americans are less worried about climate change than the residents of any other high-income country, as my colleague Megan Thee-Brennan wrote Tuesday. When you look at the details of these polls, you see that American exceptionalism on the climate stems almost entirely from Republicans.
  • last year, 25 percent of self-identified Republicans said they considered global climate change to be “a major threat.” The only countries with such low levels of climate concern are Egypt, where 16 percent of respondents called climate change a major threat, and Pakistan, where 15 percent did.
  • The Republican skepticism about climate change extends across the party, though it’s strongest among those who consider themselves part of the Tea Party. Ten percent of those aligned with the Tea Party called climate change a major threat, compared with 35 percent of Republicans who did not identify with the Tea Party.
  • ...1 more annotation...
  • these patterns match recent political events. In international negotiations, the United States has been less interested in taking steps to slow global warming than many other rich countries. President Obama and a majority of Democrats favored a bill that would have raised the cost of emitting carbon, and such a bill passed the House of Representatives in 2009. Strong opposition from Republicans in the Senate, as well as some Democrats from coal-producing states, defeated the bill there.
sissij

Online and Scared - The New York Times - 0 views

  • That is to say, a critical mass of our interactions had moved to a realm where we’re all connected but no one’s in charge.
  • And, I would argue, 2016 will be remembered as the year when we fully grasped just how scary that can be — how easy it was for a presidential candidate to tweet out untruths and half-truths faster than anyone could correct them, how cheap it was for Russia to intervene on Trump’s behalf with hacks of Democratic operatives’ computers and how unnerving it was to hear Yahoo’s chief information security officer, Bob Lord, say that his company still had “not been able to identify” how one billion Yahoo accounts and their sensitive user information were hacked in 2013.
  • Facebook — which wants all the readers and advertisers of the mainstream media but not to be saddled with its human editors and fact-checkers — is now taking more seriously its responsibilities as a news purveyor in cyberspace.
  • ...3 more annotations...
  • And that begins with teaching them that the internet is an open sewer of untreated, unfiltered information, where they need to bring skepticism and critical thinking to everything they read and basic civic decency to everything they write.
  • One assessment required middle schoolers to explain why they might not trust an article on financial planning that was written by a bank executive and sponsored by a bank.
  • Many people assume that because young people are fluent in social media they are equally perceptive about what they find there. Our work shows the opposite to be true.
  •  
    The internet has always been a big issue, since more and more people, especially teenagers, spend most of their time on it. The internet as a social medium delivers information faster and wider than any traditional medium. The mode of information spreading is now more that the internet reveals an issue and traditional media such as television follow up and provide more detailed information. However, as the internet develops, we also need to develop some rules and restrictions. We underestimate how dangerous the internet can be if it is weaponized. However, there is a dilemma: since the internet is popular because of the unlimited freedom people feel online, as the police and authorities get involved, people would ultimately lose that freedom. The censorship in China is a good example of how people respond to setting rules for the internet. There should be some sort of balance that we can strive for in the future. --Sissi (1/11/2017)
Javier E

Charlie Sykes on Where the Right Went Wrong - The New York Times - 0 views

  • But I have to admit that the campaign has made my decision easier. The conservative media is broken and the conservative movement deeply compromised.
  • Before this year, I thought I had a relatively solid grasp on what conservatism stood for and where it was going
  • I was under the impression that conservatives actually believed things about free trade, balanced budgets, character and respect for constitutional rights. Then along came this campaign.
  • ...15 more annotations...
  • When I wrote in August 2015 that Mr. Trump was a cartoon version of every left-wing media stereotype of the reactionary, nativist, misogynist right, I thought that I was well within the mainstream of conservative thought — only to find conservative Trump critics denounced for apostasy by a right that decided that it was comfortable with embracing Trumpism.
  • relatively few of my listeners bought into the crude nativism Mr. Trump was selling at his rallies.
  • What they did buy into was the argument that this was a “binary choice.” No matter how bad Mr. Trump was, my listeners argued, he could not possibly be as bad as Mrs. Clinton. You simply cannot overstate this as a factor in the final outcome
  • Even among Republicans who had no illusions about Mr. Trump’s character or judgment, the demands of that tribal loyalty took precedence. To resist was an act of betrayal.
  • In this binary tribal world, where everything is at stake, everything is in play, there is no room for quibbles about character, or truth, or principles.
  • If everything — the Supreme Court, the fate of Western civilization, the survival of the planet — depends on tribal victory, then neither individuals nor ideas can be determinative.
  • As our politics have become more polarized, the essential loyalties shift from ideas, to parties, to tribes, to individuals. Nothing else ultimately matters.
  • For many listeners, nothing was worse than Hillary Clinton. Two decades of vilification had taken their toll: Listeners whom I knew to be decent, thoughtful individuals began forwarding stories with conspiracy theories about President Obama and Mrs. Clinton — that he was a secret Muslim, that she ran a child sex ring out of a pizza parlor. When I tried to point out that such stories were demonstrably false, they generally refused to accept evidence that came from outside their bubble. The echo chamber had morphed into a full-blown alternate reality silo of conspiracy theories, fake news and propaganda.
  • In this political universe, voters accept that they must tolerate bizarre behavior, dishonesty, crudity and cruelty, because the other side is always worse; the stakes are such that no qualms can get in the way of the greater cause.
  • When it became clear that I was going to remain #NeverTrump, conservatives I had known and worked with for more than two decades organized boycotts of my show. One prominent G.O.P. activist sent out an email blast calling me a “Judas goat,” and calling for postelection retribution.
  • And then, there was social media. Unless you have experienced it, it’s difficult to describe the virulence of the Twitter storms that were unleashed on Trump skeptics. In my timelines, I found myself called a “cuckservative,” a favorite gibe of white nationalists; and someone Photoshopped my face into a gas chamber. Under the withering fire of the trolls, one conservative commentator and Republican political leader after another fell in line.
  • we had succeeded in persuading our audiences to ignore and discount any information from the mainstream media. Over time, we’d succeeded in delegitimizing the media altogether — all the normal guideposts were down, the referees discredited.
  • That left a void that we conservatives failed to fill. For years, we ignored the birthers, the racists, the truthers and other conspiracy theorists who indulged fantasies of Mr. Obama’s secret Muslim plot to subvert Christendom, or who peddled baseless tales of Mrs. Clinton’s murder victims. Rather than confront the purveyors of such disinformation, we changed the channel because, after all, they were our allies, whose quirks could be allowed or at least ignored
  • We destroyed our own immunity to fake news, while empowering the worst and most reckless voices on the right.
  • This was not mere naïveté. It was also a moral failure, one that now lies at the heart of the conservative movement even in its moment of apparent electoral triumph. Now that the election is over, don’t expect any profiles in courage from the Republican Party pushing back against those trends; the gravitational pull of our binary politics is too strong.
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • Yet Schooler has noticed that many of the data sets that end up declining seem statistically solid — that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of John Ioannidis’s most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • Palmer suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies.”
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments.”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment.”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
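The regression-to-the-mean explanation quoted above is easy to see in simulation. Below is a minimal sketch (not from the article): it assumes a small true effect, Gaussian sampling noise, and an arbitrary publication cutoff, and shows how publishing only impressive initial results guarantees that replications "decline" toward the true value.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2      # the real underlying effect size (assumed)
NOISE_SD = 0.5         # sampling error in any single study (assumed)
PUBLISH_CUTOFF = 0.8   # only "impressive" initial results get published (assumed)

initial_published = []
replications = []

for _ in range(100_000):
    # An initial study: true effect plus sampling noise.
    first = random.gauss(TRUE_EFFECT, NOISE_SD)
    if first > PUBLISH_CUTOFF:  # selection: only large observed effects appear in print
        initial_published.append(first)
        # An independent replication of the same phenomenon: fresh noise.
        replications.append(random.gauss(TRUE_EFFECT, NOISE_SD))

print(f"mean published initial effect: {statistics.mean(initial_published):.2f}")
print(f"mean replication effect:       {statistics.mean(replications):.2f}")
# The replication mean falls back toward TRUE_EFFECT: an apparent "decline"
# with no change in the underlying phenomenon at all.
```

The selection step is doing all the work here: conditioning on a big first result picks out the lucky noise draws, and the unconditioned replications simply revert to the truth — which is Schooler's puzzle in reverse, since his declining results were supposedly too well-powered for this mechanism alone.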
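Palmer's funnel-graph evidence for selective reporting can be sketched the same way. This toy simulation (my construction, with arbitrary sample sizes and an assumed file-drawer rule) assumes there is no real effect at all; small null studies go unpublished, and the published small-study literature skews positive exactly as Palmer's asymmetric funnels do.

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.0  # assume no real effect whatsoever

def run_study(n):
    """Observed effect of an n-subject study: true effect plus sampling error
    that shrinks with the square root of the sample size."""
    return random.gauss(TRUE_EFFECT, 1.0 / n**0.5)

published_small, published_large = [], []
for _ in range(50_000):
    n = random.choice([10, 400])  # a small study or a large one
    estimate = run_study(n)
    # Selective reporting: small null results quietly go in the file drawer,
    # while big, expensive studies get written up either way.
    if n == 400 or estimate > 0.3:
        (published_small if n == 10 else published_large).append(estimate)

print(f"published small studies: mean effect {statistics.mean(published_small):+.2f}")
print(f"published large studies: mean effect {statistics.mean(published_large):+.2f}")
```

Plotted as effect size against sample size, the large studies cluster tightly around zero while the surviving small studies pile up on the positive side — the skewed funnel Palmer found when he plotted every study of fluctuating asymmetry.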
kushnerha

If Philosophy Won't Diversify, Let's Call It What It Really Is - The New York Times - 0 views

  • The vast majority of philosophy departments in the United States offer courses only on philosophy derived from Europe and the English-speaking world. For example, of the 118 doctoral programs in philosophy in the United States and Canada, only 10 percent have a specialist in Chinese philosophy as part of their regular faculty. Most philosophy departments also offer no courses on Africana, Indian, Islamic, Jewish, Latin American, Native American or other non-European traditions. Indeed, of the top 50 philosophy doctoral programs in the English-speaking world, only 15 percent have any regular faculty members who teach any non-Western philosophy.
  • Given the importance of non-European traditions in both the history of world philosophy and in the contemporary world, and given the increasing numbers of students in our colleges and universities from non-European backgrounds, this is astonishing. No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain. The present situation is hard to justify morally, politically, epistemically or as good educational and research training practice.
  • While a few philosophy departments have made their curriculums more diverse, and while the American Philosophical Association has slowly broadened the representation of the world’s philosophical traditions on its programs, progress has been minimal.
  • ...9 more annotations...
  • Many philosophers and many departments simply ignore arguments for greater diversity; others respond with arguments for Eurocentrism that we and many others have refuted elsewhere. The profession as a whole remains resolutely Eurocentric.
  • Instead, we ask those who sincerely believe that it does make sense to organize our discipline entirely around European and American figures and texts to pursue this agenda with honesty and openness. We therefore suggest that any department that regularly offers courses only on Western philosophy should rename itself “Department of European and American Philosophy.”
  • We see no justification for resisting this minor rebranding (though we welcome opposing views in the comments section to this article), particularly for those who endorse, implicitly or explicitly, this Eurocentric orientation.
  • Some of our colleagues defend this orientation on the grounds that non-European philosophy belongs only in “area studies” departments, like Asian Studies, African Studies or Latin American Studies. We ask that those who hold this view be consistent, and locate their own departments in “area studies” as well, in this case, Anglo-European Philosophical Studies.
  • Others might argue against renaming on the grounds that it is unfair to single out philosophy: We do not have departments of Euro-American Mathematics or Physics. This is nothing but shabby sophistry. Non-European philosophical traditions offer distinctive solutions to problems discussed within European and American philosophy, raise or frame problems not addressed in the American and European tradition, or emphasize and discuss more deeply philosophical problems that are marginalized in Anglo-European philosophy. There are no comparable differences in how mathematics or physics are practiced in other contemporary cultures.
  • Of course, we believe that renaming departments would not be nearly as valuable as actually broadening the philosophical curriculum and retaining the name “philosophy.” Philosophy as a discipline has a serious diversity problem, with women and minorities underrepresented at all levels among students and faculty, even while the percentage of these groups increases among college students. Part of the problem is the perception that philosophy departments are nothing but temples to the achievement of males of European descent. Our recommendation is straightforward: Those who are comfortable with that perception should confirm it in good faith and defend it honestly; if they cannot do so, we urge them to diversify their faculty and their curriculum.
  • This is not to disparage the value of the works in the contemporary philosophical canon: Clearly, there is nothing intrinsically wrong with philosophy written by males of European descent; but philosophy has always become richer as it becomes increasingly diverse and pluralistic.
  • We hope that American philosophy departments will someday teach Confucius as routinely as they now teach Kant, that philosophy students will eventually have as many opportunities to study the “Bhagavad Gita” as they do the “Republic,” that the Flying Man thought experiment of the Persian philosopher Avicenna (980-1037) will be as well-known as the Brain-in-a-Vat thought experiment of the American philosopher Hilary Putnam (1926-2016), that the ancient Indian scholar Candrakirti’s critical examination of the concept of the self will be as well-studied as David Hume’s, that Frantz Fanon (1925-1961), Kwasi Wiredu (1931- ), Lame Deer (1903-1976) and Maria Lugones will be as familiar to our students as their equally profound colleagues in the contemporary philosophical canon. But, until then, let’s be honest, face reality and call departments of European-American Philosophy what they really are.
  • For demographic, political and historical reasons, the change to a more multicultural conception of philosophy in the United States seems inevitable. Heed the Stoic adage: “The Fates lead those who come willingly, and drag those who do not.”
Javier E

[Six Questions] | Astra Taylor on The People's Platform: Taking Back Power and Culture ... - 1 views

  • Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource
  • Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story.
  • why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.
  • ...15 more annotations...
  • How do we reconcile our enjoyment of social media even as we understand that the corporations who control them aren’t always acting in our best interests?
  • That was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous
  • the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors
  • I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse
  • these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.
  • I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently
  • social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?
  • That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.)
  • just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem.
  • What steps can we take to encourage better representation of independent and non-commercial media? We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.
  • 6. You advocate for greater government regulation of the Internet. Why is this important?
  • I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight.
  • I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real.
  • there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content
  • So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
qkirkpatrick

Can You Trust the News Media? - Watchtower ONLINE LIBRARY - 1 views

  • MANY people doubt what they read and hear in the news. In the United States, for example, a 2012 Gallup poll asked people “how much trust and confidence” they had in the accuracy, fairness, and completeness of the news reports of newspapers, TV, and radio. The answer from 6 out of 10 people was either “not very much” or “none at all.” Is such distrust justified?
  • Many journalists and the organizations they work for have expressed a commitment to producing accurate and informative reports. Yet, there is reason for concern. Consider the following factors:
  • MEDIA MOGULS. A small but very powerful number of corporations own primary media outlets.
  • ...4 more annotations...
  • GOVERNMENTS. Much of what we learn in the media has to do with the people and the affairs of government.
  • ADVERTISING. In most lands, media outlets must make money in order to stay in business, and most of it comes from advertising.
  • While it is wise not to believe everything we read in the news, it does not follow that there is nothing we can trust. The key may be to have a healthy skepticism, while keeping an open mind.
  • So, can you trust the news media? Sound advice is found in the wisdom of Solomon, who wrote: “Anyone inexperienced puts faith in every word, but the shrewd one considers his steps.”
  •  
    Can we trust the news media?
anonymous

Are search engines and the Internet hurting human memory? - Slate Magazine - 2 views

  • are we losing the power to retain knowledge? The short answer is: No. Machines aren’t ruining our memory. The longer answer: It’s much, much weirder than that!
  • we’ve begun to fit the machines into an age-old technique we evolved thousands of years ago—“transactive memory.” That’s the art of storing information in the people around us.
  • frankly, our brains have always been terrible at remembering details. We’re good at retaining the gist of the information we encounter. But the niggly, specific facts? Not so much.
  • ...22 more annotations...
  • subjects read several sentences. When he tested them 40 minutes later, they could generally remember the sentences word for word. Four days later, though, they were useless at recalling the specific phrasing of the sentences—but still very good at describing the meaning of them.
  • When you’re an expert in a subject, you can retain new factoids on your favorite topic easily. This only works for the subjects you’re truly passionate about, though
  • The groups that scored highest on a test of their transactive memory—in other words, the groups where members most relied on each other to recall information—performed better than those who didn't use transactive memory. Transactive groups don’t just remember better: They also analyze problems more deeply, too, developing a better grasp of underlying principles.
  • Wegner noticed that spouses often divide up memory tasks. The husband knows the in-laws' birthdays and where the spare light bulbs are kept; the wife knows the bank account numbers and how to program the TiVo
  • Together, they know a lot. Separately, less so.
  • Wegner suspected this division of labor takes place because we have pretty good "metamemory." We're aware of our mental strengths and limits, and we're good at intuiting the memory abilities of others.
  • We share the work of remembering, Wegner argued, because it makes us collectively smarter
  • They were, in a sense, Googling each other.
  • Transactive memory works best when you have a sense of how your partners' minds work—where they're strong, where they're weak, where their biases lie. I can judge that for people close to me. But it's harder with digital tools, particularly search engines
  • So humanity has always relied on coping devices to handle the details for us. We’ve long stored knowledge in books, paper, Post-it notes
  • And as it turns out, this is what we’re doing with Google and Evernote and our other digital tools. We’re treating them like crazily memorious friends who are usually ready at hand. Our “intimate dyad” now includes a silicon brain.
  • When Sparrow tested the students, the people who knew the computer had saved the information were less likely to personally recall the info than the ones who were told the trivia wouldn't be saved. In other words, if we know a digital tool is going to remember a fact, we're slightly less likely to remember it ourselves
  • believing that one won't have access to the information in the future enhances memory for the information itself, whereas believing the information was saved externally enhances memory for the fact that the information could be accessed.
  • Just as we learn through transactive memory who knows what in our families and offices, we are learning what the computer 'knows' and when we should attend to where we have stored information in our computer-based memories,
  • We’ve stored a huge chunk of what we “know” in people around us for eons. But we rarely recognize this because, well, we prefer our false self-image as isolated, Cartesian brains
  • We’re dumber and less cognitively nimble if we're not around other people—and, now, other machines.
  • When humans spew information at us unbidden, it's boorish. When machines do it, it’s enticing.
  • Though you might assume search engines are mostly used to answer questions, some research has found that up to 40 percent of all queries are acts of remembering. We're trying to refresh the details of something we've previously encountered.
  • "the thinking processes of the intimate dyad."
  • We need to develop literacy in these tools the way we teach kids how to spell and write; we need to be skeptical about search firms’ claims of being “impartial” referees of information
  • And on an individual level, it’s still important to slowly study and deeply retain things, not least because creative thought—those breakthrough ahas—come from deep and often unconscious rumination, your brain mulling over the stuff it has onboard.
  • you can stop worrying about your iPhone moving your memory outside your head. It moved out a long time ago—yet it’s still all around you.
oliviaodon

How One Psychologist Is Tackling Human Biases in Science - 0 views

  • It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.
  • Peer review seems to be a more fallible instrument—especially in areas such as medicine and psychology—than is often appreciated, as the emerging “crisis of replicability” attests.
  • Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?
  • ...10 more annotations...
  • common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.
  • The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan
  • He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations
  • Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology.
  • Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.
  • Sometimes it seems surprising that science functions at all.
  • Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”).
  • Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught.
  • Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”
  • Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims.
sissij

We're More Likely to Trust Strangers Over People We Know, Study Suggests | Big Think - 0 views

  • Social trust, the expectation that people will behave with good will and avoid harming others, is a concept that has long mystified both researchers and the general public alike.
  • evolution is about more than just rivalry, we need relationships.
  • What makes us inclined to give some people the benefit of the doubt, but occasionally cast a skeptical eye on others?
  • ...4 more annotations...
  • Substantial research has already been conducted in the realm of social trust; however, Freitag and Bauer contend that endemic methodological shortcomings have consistently yielded inconclusive data.
  • defined personality values using psychology's "Big Five" personality traits (agreeableness, openness to experience, extraversion, conscientiousness, and neuroticism-emotional stability)
  • The study results showed that people often trust total strangers more than they trust their friends.
  • Freitag and Bauer did highlight the limitations of their research. They expressly cautioned that the role of education, our networks, and our trust in institutions, matter and cannot be underestimated.
  •  
    I found it very interesting that social trust among humans also follows the logic of evolution: "evolution is about more than just rivalry, we need relationships". In my personal experience, I sometimes have a natural impulse to do some good, and that impulse feels completely out of my control. Although many people would label me a "kind" person, I don't think I am one. I seem "kind" because people seldom know me and I always keep a distance from them. My parents obviously give the opposite assessment: they know I can be cruel and cold, because we are not strangers. That's why I find keeping a distance from people very convenient, such that they never see the true side of you. --Sissi (4/11/2017)
Javier E

Eric A. Posner Reviews Jim Manzi's "Uncontrolled" | The New Republic - 0 views

  • Most urgent questions of public policy turn on empirical imponderables, and so policymakers fall back on ideological predispositions or muddle through. Is there a better way?
  • The gold standard for empirical research is the randomized field trial (RFT).
  • The RFT works better than most other types of empirical investigation. Most of us use anecdotes or common sense empiricism to make inferences about the future, but psychological biases interfere with the reliability of these methods
  • ...15 more annotations...
  • Serious empiricists frequently use regression analysis.
  • Regression analysis is inferior to RFT because of the difficulty of ruling out confounding factors (for example, that a gene jointly causes baldness and a preference for tight hats) and of establishing causation
  • RFT has its limitations as well. It is enormously expensive because you must (usually) pay a large number of people to participate in an experiment, though one can obtain a discount if one uses prisoners, especially those in a developing country. In addition, one cannot always generalize from RFTs.
  • academic research proceeds in fits and starts, using RFT when it can, but otherwise relying on regression analysis and similar tools, including qualitative case studies,
  • businesses also use RFT whenever they can. A business such as Wal-Mart, with thousands of stores, might try out some innovation like a new display in a random selection of stores, using the remaining stores as a control group
  • Manzi argues that the RFT—or more precisely, the overall approach to empirical investigation that the RFT exemplifies—provides a way of thinking about public policy. Thi
  • the universe is shaky even where, as in the case of physics, “hard science” plays the dominant role. The scientific method cannot establish truths; it can only falsify hypotheses. The hypotheses come from our daily experience, so even when science prunes away intuitions that fail the experimental method, we can never be sure that the theories that remain standing reflect the truth or just haven’t been subject to the right experiment. And even within its domain, the experimental method is not foolproof. When an experiment contradicts received wisdom, it is an open question whether the wisdom is wrong or the experiment was improperly performed.
  • The book is less interested in the RFT than in the limits of empirical knowledge. Given these limits, what attitude should we take toward government?
  • Much of scientific knowledge turns out to depend on norms of scientific behavior, good faith, convention, and other phenomena that in other contexts tend to provide an unreliable basis for knowledge.
  • Under this view of the world, one might be attracted to the cautious conservatism associated with Edmund Burke, the view that we should seek knowledge in traditional norms and customs, which have stood the test of time and presumably some sort of Darwinian competition—a human being is foolish, the species is wise. There are hints of this worldview in Manzi’s book, though he does not explicitly endorse it. He argues, for example, that we should approach social problems with a bias for the status quo; those who seek to change it carry the burden of persuasion. Once a problem is identified, we should try out our ideas on a small scale before implementing them across society
  • Pursuing the theme of federalism, Manzi argues that the federal government should institutionalize policy waivers, so states can opt out from national programs and pursue their own initiatives. A state should be allowed to opt out of federal penalties for drug crimes, for example.
  • It is one thing to say, as he does, that federalism is useful because we can learn as states experiment with different policies. But Manzi takes away much of the force of this observation when he observes, as he must, that the scale of many of our most urgent problems—security, the economy—is at the national level, so policymaking in response to these problems cannot be left to the states. He also worries about social cohesion, which must be maintained at a national level even while states busily experiment. Presumably, this implies national policy of some sort
  • Manzi’s commitment to federalism and his technocratic approach to policy, which relies so heavily on RFT, sit uneasily together. The RFT is a form of planning: the experimenter must design the RFT and then execute it by recruiting subjects, paying them, and measuring and controlling their behavior. By contrast, experimentation by states is not controlled: the critical element of the RFT—randomization—is absent.
  • The right way to go would be for the national government to conduct experiments by implementing policies in different states (or counties or other local units) by randomizing—that is, by ordering some states to be “treatment” states and other states to be “control” states,
  • Manzi’s reasoning reflects the top-down approach to social policy that he is otherwise skeptical of—although, to be sure, he is willing to subject his proposals to RFTs.
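The Wal-Mart example above can be sketched in a few lines of code: units (stores) are randomly assigned to treatment and control groups, and the effect of the innovation is estimated as the difference in group means. This is only an illustrative sketch — the store baselines and the "true" effect of 5.0 are invented for the example, not drawn from the book.

```python
import random
import statistics

def run_rft(units, treat_effect, seed=0):
    """Randomly assign units to treatment or control, simulate outcomes,
    and estimate the treatment effect as the difference in group means."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)          # randomization: the heart of the RFT
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    # Outcomes: each unit's baseline, plus the effect for treated units.
    treated_outcomes = [u + treat_effect for u in treatment]
    control_outcomes = control
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

# 100 hypothetical "stores" with varying baseline weekly sales.
stores = [100 + (i % 17) for i in range(100)]
estimate = run_rft(stores, treat_effect=5.0)
print(round(estimate, 2))  # close to 5.0: randomization balances baselines
```

Because assignment is random, confounders (store size, location, clientele) are balanced across the two groups on average — which is exactly the guarantee that regression analysis, as the review notes, cannot provide.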
Javier E

I.Q. Points for Sale, Cheap - NYTimes.com - 1 views

  • Until recently, the overwhelming consensus in psychology was that intelligence was essentially a fixed trait. But in 2008, an article by a group of researchers led by Susanne Jaeggi and Martin Buschkuehl challenged this view and renewed many psychologists’ enthusiasm about the possibility that intelligence was trainable — with precisely the kind of tasks that are now popular as games.
  • it’s important to explain why we’re not sold on the idea.
  • There have been many attempts to demonstrate large, lasting gains in intelligence through educational interventions, with few successes. When gains in intelligence have been achieved, they have been modest and the result of many years of effort.
  • ...3 more annotations...
  • Web site PsychFileDrawer.org, which was founded as an archive for failed replication attempts in psychological research, maintains a Top 20 list of studies that its users would like to see replicated. The Jaeggi study is currently No. 1.
  • Another reason for skepticism is a weakness in the Jaeggi study’s design: it included only a single test of reasoning to measure gains in intelligence.
  • Demonstrating that subjects are better on one reasoning test after cognitive training doesn’t establish that they’re smarter. It merely establishes that they’re better on one reasoning test.
markfrankel18

The Cambridge Declaration on Consciousness - Shunya's Notes - 0 views

  • A congregation of scientists in Cambridge, UK, recently issued a formal declaration that lots of non-human animals, including mammals, birds, and likely even octopuses are conscious beings. What do they mean by consciousness, you ask? It's a state of awareness of one's body and one's environment, anywhere from basic perceptual awareness to the reflective self-awareness of humans. This declaration will surely strike many of us as ancient news and a long overdue recognition, even as it may annoy the stubborn skeptics among us. 
  • We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non- human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.
Javier E

The American Scholar: The Disadvantages of an Elite Education - William Deresiewicz - 1 views

  • the last thing an elite education will teach you is its own inadequacy
  • I’m talking about the whole system in which these skirmishes play out. Not just the Ivy League and its peer institutions, but also the mechanisms that get you there in the first place: the private and affluent public “feeder” schools, the ever-growing parastructure of tutors and test-prep courses and enrichment programs, the whole admissions frenzy and everything that leads up to and away from it. The message, as always, is the medium. Before, after, and around the elite college classroom, a constellation of values is ceaselessly inculcated.
  • The first disadvantage of an elite education, as I learned in my kitchen that day, is that it makes you incapable of talking to people who aren’t like you. Elite schools pride themselves on their diversity, but that diversity is almost entirely a matter of ethnicity and race. With respect to class, these schools are largely—indeed increasingly—homogeneous. Visit any elite campus in our great nation and you can thrill to the heartwarming spectacle of the children of white businesspeople and professionals studying and playing alongside the children of black, Asian, and Latino businesspeople and professionals.
  • ...34 more annotations...
  • My education taught me to believe that people who didn’t go to an Ivy League or equivalent school weren’t worth talking to, regardless of their class. I was given the unmistakable message that such people were beneath me.
  • The existence of multiple forms of intelligence has become a commonplace, but however much elite universities like to sprinkle their incoming classes with a few actors or violinists, they select for and develop one form of intelligence: the analytic.
  • Students at places like Cleveland State, unlike those at places like Yale, don’t have a platoon of advisers and tutors and deans to write out excuses for late work, give them extra help when they need it, pick them up when they fall down.
  • When people say that students at elite schools have a strong sense of entitlement, they mean that those students think they deserve more than other people because their SAT scores are higher.
  • The political implications should be clear. As John Ruskin told an older elite, grabbing what you can get isn’t any less wicked when you grab it with the power of your brains than with the power of your fists.
  • students at places like Yale get an endless string of second chances. Not so at places like Cleveland State.
  • The second disadvantage, implicit in what I’ve been saying, is that an elite education inculcates a false sense of self-worth. Getting to an elite college, being at an elite college, and going on from an elite college—all involve numerical rankings: SAT, GPA, GRE. You learn to think of yourself in terms of those numbers. They come to signify not only your fate, but your identity; not only your identity, but your value.
  • For the elite, there’s always another extension—a bailout, a pardon, a stint in rehab—always plenty of contacts and special stipends—the country club, the conference, the year-end bonus, the dividend.
  • In short, the way students are treated in college trains them for the social position they will occupy once they get out. At schools like Cleveland State, they’re being trained for positions somewhere in the middle of the class system, in the depths of one bureaucracy or another. They’re being conditioned for lives with few second chances, no extensions, little support, narrow opportunity—lives of subordination, supervision, and control, lives of deadlines, not guidelines. At places like Yale, of course, it’s the reverse.
  • Elite schools nurture excellence, but they also nurture what a former Yale graduate student I know calls “entitled mediocrity.”
  • An elite education gives you the chance to be rich—which is, after all, what we’re talking about—but it takes away the chance not to be. Yet the opportunity not to be rich is one of the greatest opportunities with which young Americans have been blessed. We live in a society that is itself so wealthy that it can afford to provide a decent living to whole classes of people who in other countries exist (or in earlier times existed) on the brink of poverty or, at least, of indignity. You can live comfortably in the United States as a schoolteacher, or a community organizer, or a civil rights lawyer, or an artist
  • The liberal arts university is becoming the corporate university, its center of gravity shifting to technical fields where scholarly expertise can be parlayed into lucrative business opportunities.
  • You have to live in an ordinary house instead of an apartment in Manhattan or a mansion in L.A.; you have to drive a Honda instead of a BMW or a Hummer; you have to vacation in Florida instead of Barbados or Paris, but what are such losses when set against the opportunity to do work you believe in, work you’re suited for, work you love, every day of your life? Yet it is precisely that opportunity that an elite education takes away. How can I be a schoolteacher—wouldn’t that be a waste of my expensive education?
  • Isn’t it beneath me? So a whole universe of possibility closes, and you miss your true calling.
  • This is not to say that students from elite colleges never pursue a riskier or less lucrative course after graduation, but even when they do, they tend to give up more quickly than others.
  • But if you’re afraid to fail, you’re afraid to take risks, which begins to explain the final and most damning disadvantage of an elite education: that it is profoundly anti-intellectual.
  • being an intellectual is not the same as being smart. Being an intellectual means more than doing your homework.
  • The system forgot to teach them, along the way to the prestige admissions and the lucrative jobs, that the most important achievements can’t be measured by a letter or a number or a name. It forgot that the true purpose of education is to make minds, not careers.
  • Being an intellectual means, first of all, being passionate about ideas—and not just for the duration of a semester, for the sake of pleasing the teacher, or for getting a good grade.
  • Only a small minority have seen their education as part of a larger intellectual journey, have approached the work of the mind with a pilgrim soul. These few have tended to feel like freaks, not least because they get so little support from the university itself. Places like Yale, as one of them put it to me, are not conducive to searchers. Places like Yale are simply not set up to help students ask the big questions
  • Professors at top research institutions are valued exclusively for the quality of their scholarly work; time spent on teaching is time lost. If students want a conversion experience, they’re better off at a liberal arts college.
  • When elite universities boast that they teach their students how to think, they mean that they teach them the analytic and rhetorical skills necessary for success in law or medicine or science or business.
  • Although the notion of breadth is implicit in the very idea of a liberal arts education, the admissions process increasingly selects for kids who have already begun to think of themselves in specialized terms—the junior journalist, the budding astronomer, the language prodigy. We are slouching, even at elite schools, toward a glorified form of vocational training.
  • There’s a reason elite schools speak of training leaders, not thinkers—holders of power, not its critics. An independent mind is independent of all allegiances, and elite schools, which get a large percentage of their budget from alumni giving, are strongly invested in fostering institutional loyalty.
  • At a school like Yale, students who come to class and work hard expect nothing less than an A-. And most of the time, they get it.
  • Yet there is a dimension of the intellectual life that lies above the passion for ideas, though so thoroughly has our culture been sanitized of it that it is hardly surprising if it was beyond the reach of even my most alert students. Since the idea of the intellectual emerged in the 18th century, it has had, at its core, a commitment to social transformation. Being an intellectual means thinking your way toward a vision of the good society and then trying to realize that vision by speaking truth to power.
  • It takes more than just intellect; it takes imagination and courage.
  • Being an intellectual begins with thinking your way outside of your assumptions and the system that enforces them. But students who get into elite schools are precisely the ones who have best learned to work within the system, so it’s almost impossible for them to see outside it, to see that it’s even there.
  • Paradoxically, the situation may be better at second-tier schools and, in particular, again, at liberal arts colleges than at the most prestigious universities. Some students end up at second-tier schools because they’re exactly like students at Harvard or Yale, only less gifted or driven. But others end up there because they have a more independent spirit. They didn’t get straight A’s because they couldn’t be bothered to give everything in every class. They concentrated on the ones that meant the most to them or on a single strong extracurricular passion or on projects that had nothing to do with school
  • I’ve been struck, during my time at Yale, by how similar everyone looks. You hardly see any hippies or punks or art-school types, and at a college that was known in the ’80s as the Gay Ivy, few out lesbians and no gender queers. The geeks don’t look all that geeky; the fashionable kids go in for understated elegance. Thirty-two flavors, all of them vanilla.
  • The most elite schools have become places of a narrow and suffocating normalcy. Everyone feels pressure to maintain the kind of appearance—and affect—that go with achievement
  • Now that students are in constant electronic contact, they never have trouble finding each other. But it’s not as if their compulsive sociability is enabling them to develop deep friendships.
  • What happens when busyness and sociability leave no room for solitude? The ability to engage in introspection, I put it to my students that day, is the essential precondition for living an intellectual life, and the essential precondition for introspection is solitude
  • the life of the mind is lived one mind at a time: one solitary, skeptical, resistant mind at a time. The best place to cultivate it is not within an educational system whose real purpose is to reproduce the class system.