
TOK Friends: Group items tagged randomness


sissij

Flossing and the Art of Scientific Investigation - The New York Times - 1 views

  • the form of definitive randomized controlled trials, the so-called gold standard for scientific research.
  • Yet the notion has taken hold that such expertise is fatally subjective and that only randomized controlled trials provide real knowledge.
  • the evidence-based medicine movement, which placed such trials atop a hierarchy of scientific methods, with expert opinion situated at the bottom.
  • each of these is valuable in its own way.
  • The cult of randomized controlled trials also neglects a rich body of potential hypotheses.
    This article talks about bias within the scientific method. As we learned in TOK, the scientific method is very much based on experiments, and definitive randomized controlled trials are the gold standard for scientific research. But as this article argues, are randomized controlled trials the only source of support worth believing? The advice and experience of an expert are also very important. Why can't machines completely replace the role of a doctor? Because humans are able to analyze and evaluate their experience and the patterns they recognize, while machines are only capable of organizing data; they can't design a unique prescription that fits a particular patient. Expert opinion shouldn't be completely neglected or underestimated, since science always needs a leap of imagination that only humans, not machines, can generate. --Sissi (1/30/2017)
Javier E

Gamblers, Scientists and the Mysterious Hot Hand - The New York Times - 0 views

  • Psychologists who study how the human mind responds to randomness call this the gambler’s fallacy — the belief that on some cosmic plane a run of bad luck creates an imbalance that must ultimately be corrected, a pressure that must be relieved
  • The opposite of that is the hot-hand fallacy — the belief that winning streaks, whether in basketball or coin tossing, have a tendency to continue
  • Both misconceptions are reflections of the brain’s wired-in rejection of the power that randomness holds over our lives. Look deep enough, we instinctively believe, and we may uncover a hidden order.
  • A working paper published this summer has caused a stir by proposing that a classic body of research disproving the existence of the hot hand in basketball is flawed by a subtle misperception about randomness. If the analysis is correct, the possibility remains that the hot hand is real.
  • We mortals can benefit, at least in theory, from islands of predictability — a barely perceptible tilt of a roulette table that makes the ball slightly more likely to land on one side of the wheel than the other
  • The same is true for the random walk of the stock market. Becoming aware of information before it has propagated worldwide can give a speculator a tiny, temporary edge. Some traders pay a premium to locate their computer servers as close as possible to Lower Manhattan, gaining advantages measured in microseconds.
  • Taken to extremes, seeing connections that don’t exist can be a symptom of a psychiatric condition called apophenia. In less pathological forms, the brain’s hunger for pattern gives rise to superstitions (astrology, numerology) and is a driving factor in what has been called a replication crisis in science
  • I know it sounds crazy but when you average the scores together the answer is not 50-50, as most people would expect, but about 40-60 in favor of tails.
  • There is not, as Guildenstern might imagine, a tear in the fabric of space-time. It remains as true as ever that each flip is independent, with even odds that the coin will land one way or the other. But by concentrating on only some of the data — the flips that follow heads — a gambler falls prey to a selection bias.
  • basketball is no streakier than a coin toss. For a 50 percent shooter, for example, the odds of making a basket are supposed to be no better after a hit — still 50-50. But in a purely random situation, according to the new analysis, a hit would be expected to be followed by another hit less than half the time. Finding 50 percent would actually be evidence in favor of the hot hand
  • Dr. Gilovich is withholding judgment. “The larger the sample of data for a given player, the less of an issue this is,” he wrote in an email. “Because our samples were fairly large, I don’t believe this changes the original conclusions about the hot hand.”
  • Take a fair coin — one as likely to land on heads as tails — and flip it four times. How often was heads followed by another head?
  • For all their care to be objective, scientists are as prone as anyone to valuing data that support their hypothesis over those that contradict it. Sometimes this results in experiments that succeed only under very refined conditions, in certain labs with special reagents and performed by a scientist with a hot hand.
  • We’re all in the same boat. We evolved with this uncanny ability to find patterns. The difficulty lies in separating what really exists from what is only in our minds.
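
A quick way to check the surprising 40-60 claim in the notes above is to enumerate all sixteen sequences of four fair coin flips and average, sequence by sequence, the share of heads that are followed by another head. This is a minimal sketch of that calculation; the setup follows the article's description, and the code itself is illustrative:

```python
from itertools import product

# Enumerate every sequence of four fair coin flips (H = heads, T = tails).
ratios = []
for seq in product("HT", repeat=4):
    # Only the first three flips have a successor flip.
    heads = [i for i in range(3) if seq[i] == "H"]
    if not heads:
        continue  # no head is ever followed by anything in this sequence
    followed_by_head = sum(1 for i in heads if seq[i + 1] == "H")
    ratios.append(followed_by_head / len(heads))

# Averaging the per-sequence percentages gives about 0.405, not 0.5.
print(f"average share of heads followed by heads: {sum(ratios) / len(ratios):.3f}")
```

Each individual flip is still 50-50; the shortfall appears only because percentages are averaged sequence by sequence. That is exactly the selection bias the article describes, and it is why finding 50 percent in basketball data would actually count as evidence for the hot hand.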
Emilio Ergueta

Lessons from Gaming #2: Random Universe | Talking Philosophy - 0 views

  • My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality.
  • Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality.
  • Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.
  • I do not know if the universe is random (contains elements of chance). After all, we tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance.
  • even if things could have been different it does not follow that chance is real. After all, chance is not the only thing that could make a difference.
  • Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not.
  • because of chance, the results of any choice cannot be known with certainty
  • if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance.
  • the role of chance in success and failure should be considered when planning and creating policies.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer:] ...in engineering? Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, its got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just like as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology in organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. [Interviewer:] Well, they constrain the biology, sure. Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in, say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
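
Chomsky's weather example contrasts statistical approximation with understanding. As a toy illustration of the "statistical priors" approach he describes, here is a hypothetical persistence model that predicts tomorrow's weather purely from observed transition frequencies; the data and names are invented, and no meteorology is involved:

```python
from collections import Counter, defaultdict

# Invented observation history of daily weather.
history = ["sun", "sun", "rain", "rain", "sun", "sun", "sun", "rain", "sun", "sun"]

# Tally empirical transition counts, i.e. estimate P(tomorrow | today) from data.
transitions = defaultdict(Counter)
for today, tomorrow in zip(history, history[1:]):
    transitions[today][tomorrow] += 1

def predict(today):
    # Predict the most frequently observed successor of today's weather.
    seen = transitions[today]
    return seen.most_common(1)[0][0] if seen else today

print(predict("sun"))  # often a decent approximation -- with zero understanding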
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
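
The mechanics of the decline effect described above — publication bias plus regression to the mean — can be reproduced in a toy simulation: run many small studies of a modest true effect, "publish" only those that clear a significance cutoff, then replicate the published ones. All parameters below are invented for illustration:

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT, N, STUDIES = 0.2, 30, 2000

def run_study():
    # One small study: the sample mean of N noisy measurements of the effect.
    return statistics.mean(random.gauss(TRUE_EFFECT, 1) for _ in range(N))

# Approximate one-sided p < .05 cutoff for a sample mean with sigma = 1.
cutoff = 1.645 / N ** 0.5

published = [e for e in (run_study() for _ in range(STUDIES)) if e > cutoff]
replications = [run_study() for _ in published]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published effect:   {statistics.mean(published):.2f}")     # inflated
print(f"mean replication effect: {statistics.mean(replications):.2f}")  # declines
```

The published estimates are inflated because only lucky draws clear the cutoff; unfiltered replications regress back toward the true effect, producing a "decline" with no change in the underlying phenomenon.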
Javier E

Eric A. Posner Reviews Jim Manzi's "Uncontrolled" | The New Republic - 0 views

  • Most urgent questions of public policy turn on empirical imponderables, and so policymakers fall back on ideological predispositions or muddle through. Is there a better way?
  • The gold standard for empirical research is the randomized field trial (RFT).
  • The RFT works better than most other types of empirical investigation. Most of us use anecdotes or common sense empiricism to make inferences about the future, but psychological biases interfere with the reliability of these methods
  • Serious empiricists frequently use regression analysis.
  • Regression analysis is inferior to RFT because of the difficulty of ruling out confounding factors (for example, that a gene jointly causes baldness and a preference for tight hats) and of establishing causation
  • RFT has its limitations as well. It is enormously expensive because you must (usually) pay a large number of people to participate in an experiment, though one can obtain a discount if one uses prisoners, especially those in a developing country. In addition, one cannot always generalize from RFTs.
  • academic research proceeds in fits and starts, using RFT when it can, but otherwise relying on regression analysis and similar tools, including qualitative case studies,
  • businesses also use RFT whenever they can. A business such as Wal-Mart, with thousands of stores, might try out some innovation like a new display in a random selection of stores, using the remaining stores as a control group
  • Manzi argues that the RFT—or more precisely, the overall approach to empirical investigation that the RFT exemplifies—provides a way of thinking about public policy.
  • the universe is shaky even where, as in the case of physics, “hard science” plays the dominant role. The scientific method cannot establish truths; it can only falsify hypotheses. The hypotheses come from our daily experience, so even when science prunes away intuitions that fail the experimental method, we can never be sure that the theories that remain standing reflect the truth or just haven’t been subject to the right experiment. And even within its domain, the experimental method is not foolproof. When an experiment contradicts received wisdom, it is an open question whether the wisdom is wrong or the experiment was improperly performed.
  • The book is less interested in the RFT than in the limits of empirical knowledge. Given these limits, what attitude should we take toward government?
  • Much of scientific knowledge turns out to depend on norms of scientific behavior, good faith, convention, and other phenomena that in other contexts tend to provide an unreliable basis for knowledge.
  • Under this view of the world, one might be attracted to the cautious conservatism associated with Edmund Burke, the view that we should seek knowledge in traditional norms and customs, which have stood the test of time and presumably some sort of Darwinian competition—a human being is foolish, the species is wise. There are hints of this worldview in Manzi’s book, though he does not explicitly endorse it. He argues, for example, that we should approach social problems with a bias for the status quo; those who seek to change it carry the burden of persuasion. Once a problem is identified, we should try out our ideas on a small scale before implementing them across society
  • Pursuing the theme of federalism, Manzi argues that the federal government should institutionalize policy waivers, so states can opt out from national programs and pursue their own initiatives. A state should be allowed to opt out of federal penalties for drug crimes, for example.
  • It is one thing to say, as he does, that federalism is useful because we can learn as states experiment with different policies. But Manzi takes away much of the force of this observation when he observes, as he must, that the scale of many of our most urgent problems—security, the economy—is at the national level, so policymaking in response to these problems cannot be left to the states. He also worries about social cohesion, which must be maintained at a national level even while states busily experiment. Presumably, this implies national policy of some sort
  • Manzi’s commitment to federalism and his technocratic approach to policy, which relies so heavily on RFT, sit uneasily together. The RFT is a form of planning: the experimenter must design the RFT and then execute it by recruiting subjects, paying them, and measuring and controlling their behavior. By contrast, experimentation by states is not controlled: the critical element of the RFT—randomization—is absent.
  • The right way to go would be for the national government to conduct experiments by implementing policies in different states (or counties or other local units) by randomizing—that is, by ordering some states to be “treatment” states and other states to be “control” states,
  • Manzi’s reasoning reflects the top-down approach to social policy that he is otherwise skeptical of—although, to be sure, he is willing to subject his proposals to RFTs.
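
The review's gene-causes-baldness-and-tight-hats example is easy to simulate, and doing so shows concretely why randomization beats regression on observational data. In this sketch (all probabilities invented), a hidden gene drives both traits, so hat-wearing and baldness are associated even though neither causes the other; randomly assigning hats makes the association vanish:

```python
import random

random.seed(0)

def person(randomize_hats=False):
    gene = random.random() < 0.5
    # The gene raises the chance of both baldness and a taste for tight hats.
    bald = random.random() < (0.6 if gene else 0.1)
    if randomize_hats:
        hat = random.random() < 0.5  # assigned by the experimenter, as in an RFT
    else:
        hat = random.random() < (0.7 if gene else 0.2)  # self-selected
    return hat, bald

def bald_rate(people, wears_hat):
    group = [bald for hat, bald in people if hat == wears_hat]
    return sum(group) / len(group)

observational = [person() for _ in range(100_000)]
randomized = [person(randomize_hats=True) for _ in range(100_000)]

# Observational data: hat wearers look far balder (pure confounding).
print(bald_rate(observational, True), bald_rate(observational, False))
# Randomized assignment: the spurious association disappears.
print(bald_rate(randomized, True), bald_rate(randomized, False))
```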
Javier E

When Roommates Were Random - NYTimes.com - 0 views

  • We tend to value order and control over randomness, but when we lose randomness, we also lose serendipity.
  • there are, in fact, long-lasting effects of whom you end up living with your first year.
Javier E

UK mathematician wins richest prize in academia | Mathematics | The Guardian - 0 views

  • Martin Hairer, an Austrian-British researcher at Imperial College London, is the winner of the 2021 Breakthrough prize for mathematics, an annual $3m (£2.3m) award that has come to rival the Nobels in terms of kudos and prestige.
  • Hairer landed the prize for his work on stochastic analysis, a field that describes how random effects turn the maths of things like stirring a cup of tea, the growth of a forest fire, or the spread of a water droplet that has fallen on a tissue into a fiendishly complex problem.
  • His major work, a 180-page treatise that introduced the world to “regularity structures”, so stunned his colleagues that one suggested it must have been transmitted to Hairer by a more intelligent alien civilisation.
  • After dallying with physics at university, Hairer moved into mathematics. The realisation that ideas in theoretical physics can be overturned and swiftly consigned to the dustbin did not appeal. “I wouldn’t really want to put my name to a result that could be superseded by something else three years later,” he said. “In mathematics, if you obtain a result then that is it. It’s the universality of mathematics, you discover absolute truths.”
  • Hairer’s expertise lies in stochastic partial differential equations, a branch of mathematics that describes how randomness throws disorder into processes such as the movement of wind in a wind tunnel or the creeping boundary of a water droplet landing on a tissue. When the randomness is strong enough, solutions to the equations get out of control. “In some cases, the solutions fluctuate so wildly that it is not even clear what the equation meant in the first place,” he said.
  • With the invention of regularity structures, Hairer showed how the infinitely jagged noise that threw his equations into chaos could be reframed and tamed.
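
For a feel of what "randomness throwing disorder into a process" means, here is a toy Euler-Maruyama simulation of a simple one-dimensional stochastic differential equation, dX = -X dt + sigma dW. This is only a cartoon of stochastic analysis, nowhere near the stochastic *partial* differential equations that Hairer's regularity structures were built to tame:

```python
import random

def simulate(sigma, steps=1000, dt=0.01, seed=42):
    # Euler-Maruyama discretisation of dX = -X dt + sigma dW.
    rng = random.Random(seed)
    x, path = 1.0, []
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Brownian increment ~ N(0, dt)
        x += -x * dt + sigma * dw
        path.append(x)
    return path

for sigma in (0.0, 0.5, 3.0):
    path = simulate(sigma)
    print(f"sigma={sigma}: final x = {path[-1]:+.3f}, "
          f"max |x| = {max(abs(v) for v in path):.3f}")
```

With sigma = 0 the path decays smoothly to zero; as the noise grows, the solution fluctuates ever more wildly — a one-dimensional glimpse of why strong randomness can make such equations hard even to define.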
katieb0305

Are You Successful? If So, You've Already Won the Lottery - The New York Times - 0 views

  • Chance events play a much larger role in life than many people once imagined.
  • But randomness often plays out in subtle ways, and it’s easy to construct narratives that portray success as having been inevitable.
  • In the years since, the painting has come to represent Western culture itself. Yet had it never been stolen, most of us would know no more about it than we do of the two obscure Leonardo da Vinci canvases from the same period that hang in an adjacent gallery at the Louvre.
  • Inevitably, some of those initial steps will have been influenced by seemingly trivial random events. So it is reasonable to conclude that virtually all successful careers entail at least a modicum of luck.
  • One’s date of birth can matter enormously, for example. According to a 2008 study, most children born in the summer tend to be among the youngest members of their class at school, which appears to explain why they are significantly less likely to hold leadership positions during high school and thus, another study indicates, less likely to land premium jobs later in life. Similarly, according to research published in the journal Economics Letters in 2012, the number of American chief executives who were born in June and July is almost one-third lower than would be expected on the basis of chance alone.
  • To acknowledge the importance of random events is not to suggest that success is independent of talent and effort. In highly competitive arenas, those who do well are almost always extremely talented and hard-working.
  • Such expertise comes not from luck but from thousands of hours of assiduous effort.
  • Being born in a good environment is one of the few dimensions of luck we can control — that is, at least we can decide how lucky our children will be.
  • The unlucky population is growing, and its luck is getting worse.
  • Evidence from the social sciences demonstrates that beyond a certain income threshold, people’s sense of well-being depends much more on their relative purchasing power than on how much they spend in absolute terms. If top tax rates were a little higher, all homes would be a little smaller, all cars a little less expensive, all diamonds a little more modest and all celebrations a little less costly. The standards that define “special” would adjust accordingly, leaving most successful people quite satisfied.
  • Merely prompting people to reflect on their good fortune tends to make them more willing to contribute to the common good, according to a 2010 study published in the journal Emotion.
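
A standard toy model makes the article's point about luck in competitive arenas vivid. Suppose success is weighted 95 percent skill and only 5 percent luck (weights invented for illustration) and 100,000 people compete; the winners turn out to be both very skilled and very lucky:

```python
import random

random.seed(7)

# Each contestant gets a skill score and a luck score, both uniform on [0, 1].
people = [(random.random(), random.random()) for _ in range(100_000)]

# Rank by a 95% skill / 5% luck composite and take the top ten.
winners = sorted(people, key=lambda p: 0.95 * p[0] + 0.05 * p[1], reverse=True)[:10]

avg_luck = sum(luck for _, luck in winners) / len(winners)
print(f"average luck among the top 10: {avg_luck:.2f} (population average: 0.50)")
```

In a large, talented field everyone near the top is highly skilled, so the small luck term ends up deciding who actually wins — which is why nearly all big winners are lucky even when luck barely matters.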
Javier E

Lies, Damned Lies, and Medical Science - Magazine - The Atlantic - 0 views

  • He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community
  • for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem
  • he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals
  • “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
  • Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time.
  • if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
  • He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals
  • Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.
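
The engine of Ioannidis's argument is a short positive-predictive-value calculation: how likely a "positive" finding is to be true, given the prior plausibility of hypotheses in a field, the significance threshold, statistical power, and bias. A minimal sketch of that calculation, with illustrative numbers:

```python
def ppv(prior, alpha=0.05, power=0.8, bias=0.0):
    """Probability that a claimed positive finding is true (after Ioannidis 2005).

    prior: pre-study probability that the hypothesis is true
    bias:  fraction of would-be negative results reported as positive anyway
    """
    true_pos = prior * (power + bias * (1 - power))
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

# Exploratory research: long-shot hypotheses, modest power, some bias.
print(f"{ppv(prior=0.05, power=0.4, bias=0.2):.2f}")   # ~0.10: most positives false
# Well-powered trial of a plausible hypothesis with little bias.
print(f"{ppv(prior=0.50, power=0.8, bias=0.05):.2f}")  # ~0.89: most positives true
```

Plugging in field-typical numbers is how the model yields wrongness rates that track the observed refutation rates quoted above.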
knudsenlu

Hawaii: Where Evolution Can Be Surprisingly Predictable - The Atlantic - 0 views

  • Situated around 2,400 miles from the nearest continent, the Hawaiian Islands are about as remote as it’s possible for islands to be. In the last 5 million years, they’ve been repeatedly colonized by far-traveling animals, which then diversified into dozens of new species. Honeycreeper birds, fruit flies, carnivorous caterpillars ... all of these creatures reached Hawaii, and evolved into wondrous arrays of unique forms.
  • The most spectacular of these spider dynasties, Gillespie says, are the stick spiders. They’re so-named because some of them have long, distended abdomens that make them look like twigs. “You only see them at night, walking around the understory very slowly,” Gillespie says. “They’re kind of like sloths.” Murderous sloths, though: Their sluggish movements allow them to sneak up on other spiders and kill them.
  • Gillespie has shown that the gold spiders on Oahu belong to a different species from those on Kauai or Molokai. In fact, they’re more closely related to their brown and white neighbors from Oahu. Time and again, these spiders have arrived on new islands and evolved into new species—but always in one of three basic ways. A gold spider arrives on Oahu, and diversifies into gold, brown, and white species. Another gold spider hops across to Maui and again diversifies into gold, brown, and white species. “They repeatedly evolve the same forms,” says Gillespie.
  • Gillespie has seen this same pattern before, among Hawaii’s long-jawed goblin spiders. Each island has its own representatives of the four basic types: green, maroon, small brown, and large brown. At first, Gillespie assumed that all the green species were related to each other. But the spiders’ DNA revealed that the ones that live on the same islands are most closely related, regardless of their colors. They too have hopped from one island to another, radiating into the same four varieties wherever they land.
  • One of the most common misunderstandings about evolution is that it is a random process. Mutations are random, yes, but those mutations then rise and fall in ways that are anything but random. That’s why stick spiders, when they invade a new island, don’t diversify into red species, or zebra-striped ones. The environment of Hawaii sculpts their bodies in a limited number of ways.
  • Gillespie adds that there’s an urgency to this work. For millions of years, islands like Hawaii have acted as crucibles of evolution, allowing living things to replay evolution’s tape in the way that Gould envisaged. But in a much shorter time span, humans have threatened the results of those natural experiments. “The Hawaiian islands are in dire trouble from invasive species, and environmental modifications,” says Gillespie. “And you have all these unknown groups of spiders—entire lineages of really beautiful, charismatic animals, most of which are undescribed.”
Javier E

Strange things are taking place - at the same time - 0 views

  • In February 1973, Dr. Bernard Beitman found himself hunched over a kitchen sink in an old Victorian house in San Francisco, choking uncontrollably. He wasn’t eating or drinking, so there was nothing to cough up, and yet for several minutes he couldn’t catch his breath or swallow. The next day his brother called to tell him that 3,000 miles away, in Wilmington, Del., their father had died. He had bled into his throat, choking on his own blood at the same time as Beitman’s mysterious episode.
  • Overcome with awe and emotion, Beitman became fascinated with what he calls meaningful coincidences. After becoming a professor of psychiatry at the University of Missouri-Columbia, he published several papers and two books on the subject and started a nonprofit, the Coincidence Project, to encourage people to share their coincidence stories.
  • “What I look for as a scientist and a spiritual seeker are the patterns that lead to meaningful coincidences,” said Beitman, 80, from his home in Charlottesville, Va. “So many people are reporting this kind of experience. Understanding how it happens is part of the fun.”
  • Beitman defines a coincidence as “two events coming together with apparently no causal explanation.” They can be life-changing, like his experience with his father, or comforting, such as when a loved one’s favorite song comes on the radio just when you are missing them most.
  • Although Beitman has long been fascinated by coincidences, it wasn’t until the end of his academic career that he was able to study them in earnest. (Before then, his research primarily focused on the relationship between chest pain and panic disorder.)
  • He started by developing the Weird Coincidence Survey in 2006 to assess what types of coincidences are most commonly observed, what personality types are most correlated with noticing them and how most people explain them. About 3,000 people have completed the survey so far.
  • he has drawn a few conclusions. The most commonly reported coincidences are associated with mass media: A person thinks of an idea and then hears or sees it on TV, the radio or the internet. Thinking of someone and then having that person call unexpectedly is next on the list, followed by being in the right place at the right time to advance one’s work, career or education.
  • People who describe themselves as spiritual or religious report noticing more meaningful coincidences than those who do not, and people are more likely to experience coincidences when they are in a heightened emotional state — perhaps under stress or grieving.
  • The most popular explanation among survey respondents for mysterious coincidences: God or fate. The second explanation: randomness. The third is that our minds are connected to one another. The fourth is that our minds are connected to the environment.
  • “Some say God, some say universe, some say random and I say ‘Yes,’ ” he said. “People want things to be black and white, yes or no, but I say there is mystery.”
  • He’s particularly interested in what he’s dubbed “simulpathity”: feeling a loved one’s pain at a distance, as he believes he did with his father. Science can’t currently explain how it might occur, but in his books he offers some nontraditional ideas, such as the existence of “the psychosphere,” a kind of mental atmosphere through which information and energy can travel between two people who are emotionally close though physically distant.
  • In his new book published in September, “Meaningful Coincidences: How and Why Synchronicity and Serendipity Happen,” he shares the story of a young man who intended to end his life by the shore of an isolated lake. While he sat crying in his car, another car pulled up and his brother got out. When the young man asked for an explanation, the brother said he didn’t know why he got in the car, where he was going, or what he would do when he got there. He just knew he needed to get in the car and drive.
  • David Hand, a British statistician and author of the 2014 book “The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day,” sits at the opposite end of the spectrum from Beitman. He says most coincidences are fairly easy to explain, and he specializes in demystifying even the strangest ones.
  • “When you look closely at a coincidence, you can often discover the chance of it happening is not as small as you think,” he said. “It’s perhaps not a 1-in-a-billion chance, but in fact a 1-in-a-hundred chance, and yeah, you would expect that would happen quite often.”
  • the law of truly large numbers. “You take something that has a very small chance of happening and you give it lots and lots and lots of opportunities to happen,” he said. “Then the overall probability becomes big.”
  • But just because Hand has a mathematical perspective doesn’t mean he finds coincidences boring. “It’s like looking at a rainbow,” he said. “Just because I understand the physics behind it doesn’t make it any the less wonderful.
  • Paying attention to coincidences, Osman and Johansen say, is an essential part of how humans make sense of the world. We rely constantly on our understanding of cause and effect to survive.
  • “Coincidences are often associated with something mystical or supernatural, but if you look under the hood, noticing coincidences is what humans do all the time,”
  • Zeltzer has spent 50 years studying the writings of Carl Jung, the 20th century Swiss psychologist who introduced the modern Western world to the idea of synchronicity. Jung defined synchronicity as “the coincidence in time of two or more causally unrelated events which have the same meaning.”
  • One of Jung’s most iconic synchronistic stories concerned a patient who he felt had become so stuck in her rationality that it interfered with her ability to understand her psychology and emotional life.
  • One day, the patient was recounting a dream in which she’d received a golden scarab. Just then, Jung heard a gentle tapping at the window. He opened the window and a scarab-like beetle flew into the room. Jung plucked the insect out of the air and presented it to his patient. “Here is your scarab,” he said. The experience proved therapeutic because it demonstrated to Jung’s patient that the world is not always rational, leading her to break her own identification with rationality and thus become more open to her emotional life, Zeltzer explained
  • Like Jung, Zeltzer believes meaningful coincidences can encourage people to acknowledge the irrational and mysterious. “We have a fantasy that there is always an answer, and that we should know everything,”
  • Honestly, I’m not sure what to believe, but I’m not sure it matters. Like Beitman, my attitude is “Yes.”
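
Hand's "law of truly large numbers" is ordinary arithmetic: a rare event given enough opportunities becomes near-certain. A minimal sketch with invented numbers:

```python
# Chance of at least one hit for a one-in-a-million event, given many tries.
p = 1e-6

for opportunities in (1_000, 1_000_000, 100_000_000):
    at_least_once = 1 - (1 - p) ** opportunities
    print(f"{opportunities:>11,} opportunities -> P(at least one) = {at_least_once:.3f}")
```

With hundreds of millions of people each living thousands of days, "one-in-a-million" coincidences should surface constantly — which is Hand's explanation for why astonishing stories are so routinely reported.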
Javier E

Nobel Prize in Physics Is Awarded to 3 Scientists for Work Exploring Quantum Weirdness ... - 0 views

  • “We’re used to thinking that information about an object — say that a glass is half full — is somehow contained within the object.” Instead, he says, entanglement means objects “only exist in relation to other objects, and moreover these relationships are encoded in a wave function that stands outside the tangible physical universe.”
  • Einstein, though one of the founders of quantum theory, rejected it, saying famously that God did not play dice with the universe. In a 1935 paper written with Boris Podolsky and Nathan Rosen, he tried to demolish quantum mechanics as an incomplete theory by pointing out that by quantum rules, measuring a particle in one place could instantly affect measurements of the other particle, even if it was millions of miles away.
  • Dr. Clauser, who has a knack for electronics and experimentation and misgivings about quantum theory, was the first to perform Bell’s proposed experiment. He happened upon Dr. Bell’s paper while a graduate student at Columbia University and recognized it as something he could do.
  • In 1972, using duct tape and spare parts in the basement on the campus of the University of California, Berkeley, Dr. Clauser and a graduate student, Stuart Freedman, who died in 2012, endeavored to perform Bell’s experiment to measure quantum entanglement. In a series of experiments, he fired thousands of light particles, or photons, in opposite directions to measure a property known as polarization, which could have only two values — up or down. The result for each detector was always a series of seemingly random ups and downs. But when the two detectors’ results were compared, the ups and downs matched in ways that neither “classical physics” nor Einstein’s laws could explain. Something weird was afoot in the universe. Entanglement seemed to be real.
  • in 2002, Dr. Clauser admitted that he himself had expected quantum mechanics to be wrong and Einstein to be right. “Obviously, we got the ‘wrong’ result. I had no choice but to report what we saw, you know, ‘Here’s the result.’ But it contradicts what I believed in my gut has to be true.” He added, “I hoped we would overthrow quantum mechanics. Everyone else thought, ‘John, you’re totally nuts.’”
  • the correlations only showed up after the measurements of the individual particles, when the physicists compared their results after the fact. Entanglement seemed real, but it could not be used to communicate information faster than the speed of light.
  • In 1982, Dr. Aspect and his team at the University of Paris tried to outfox Dr. Clauser’s loophole by switching the direction along which the photons’ polarizations were measured every 10 nanoseconds, while the photons were already in the air and too fast for them to communicate with each other. He too, was expecting Einstein to be right.
  • Quantum predictions held true, but there were still more possible loopholes in the Bell experiment that Dr. Clauser had identified
  • For example, the polarization directions in Dr. Aspect’s experiment had been changed in a regular and thus theoretically predictable fashion that could be sensed by the photons or detectors.
  • Anton Zeilinger
  • added even more randomness to the Bell experiment, using random number generators to change the direction of the polarization measurements while the entangled particles were in flight.
  • Once again, quantum mechanics beat Einstein by an overwhelming margin, closing the “locality” loophole.
  • as scientists have done more experiments with entangled particles, entanglement has come to be accepted as one of the main features of quantum mechanics, and it is being put to work in cryptology, quantum computing and an upcoming "quantum internet."
  • One of its first successes in cryptology is sending messages using entangled pairs, which can distribute cryptographic keys securely: any eavesdropping will destroy the entanglement, alerting the receiver that something is wrong.
  • with quantum mechanics, just because we can use it doesn't mean our ape brains understand it. The pioneering quantum physicist Niels Bohr once said that anyone who didn't think quantum mechanics was outrageous hadn't understood what was being said.
  • In his interview with A.I.P., Dr. Clauser said, “I confess even to this day that I still don’t understand quantum mechanics, and I’m not even sure I really know how to use it all that well. And a lot of this has to do with the fact that I still don’t understand it.”
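    To make the measured correlations concrete, here is a minimal Monte Carlo sketch in Python of a CHSH-style Bell test (illustrative only; the function names and trial counts are my own, not the experimenters'). It samples photon pairs so that the two outcomes agree with probability cos²(a − b), the quantum prediction for polarizers at angles a and b, then combines four correlations into the CHSH quantity S. Note that the helper computes Bob's outcome using both angles at once; that built-in nonlocality is exactly what Bell's theorem says any model reproducing these statistics must contain. Local hidden-variable theories require |S| ≤ 2, while the simulation lands near 2√2 ≈ 2.83.

        import math
        import random

        def entangled_pair(angle_a, angle_b):
            """One photon pair: quantum mechanics predicts the two detector
            outcomes agree with probability cos^2(angle_a - angle_b)."""
            a = random.choice([1, -1])  # Alice's outcome alone looks random
            agree = random.random() < math.cos(angle_a - angle_b) ** 2
            return a, (a if agree else -a)

        def correlation(angle_a, angle_b, trials=200_000):
            """Estimate E(a, b), the average product of paired outcomes."""
            total = sum(x * y for x, y in
                        (entangled_pair(angle_a, angle_b) for _ in range(trials)))
            return total / trials

        # Standard CHSH test angles for photon polarization, in radians.
        a1, a2 = 0.0, math.pi / 4              # Alice: 0 and 45 degrees
        b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob: 22.5 and 67.5 degrees

        S = (correlation(a1, b1) - correlation(a1, b2)
             + correlation(a2, b1) + correlation(a2, b2))
        print(f"CHSH S = {S:.3f} (local limit 2.000, quantum limit {2 * math.sqrt(2):.3f})")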
Emily Horwitz

Why do we cough more at classical music concerts? - BBC Mundo - Noticias - 0 views

  • Everything is silent. The strings, winds and percussion await the conductor's cue to begin the piece. On the other side sits the hushed audience, swallowing hard and holding back coughs. Someone can't help it, and with the first chord the coughing begins. Why does this always happen?
  • "All the existing statistics suggest that people cough twice as much during concerts," Wagener told the BBC.
  • The specialist discovered that coughing is not completely random. The piece being performed also prompts more or less coughing.
  • ...8 more annotations...
  • "Si se trata de conciertos más modernos, como por ejemplo música clásica del siglo XX, los movimientos más lentos y los silencios son interrumpidos con mayor frecuencia".
  • cuando alguien empieza a toser y contagia a los otros.
  • "Creo que muchas personas cuando van a conciertos clásicos se dan cuenta que el nivel de ruido es mucho menor que la música a la que están acostumbradas a oir a través de sus auriculares o conciertos de música pop", agregó la pianista.
  • ese silencio en los conciertos acústicos es reconfortante, para otros puede originar inconformidad que se manifiesta en la acción de toser.
  • Andreas Wagener se mostró parcialmente de acuerdo con la teoría de Tomes, pues "cuando alguien va a un concierto (de música clásica) sabe que debe permanecer en silencio".
  • "Es una cuestión de etiqueta, saben que no deben hablar o caminar, hacer ruido o toser, pero la gente sigue tosiendo en exceso".
  • con la tos no se puede saber si es deliberado o involuntario.
  • "Creo que a veces la gente no esta consciente de como suena para el concertista. Es un factor muy distractor".
  •  
    I realize that this article is in Spanish, so those who don't understand the language may find the original hard to follow, but I thought it was very interesting and related to TOK. Essentially, the article discusses a study by Andreas Wagener, a German scientist, which found that people cough twice as much at classical music concerts as elsewhere. Wagener also found that the amount of coughing is not random; rather, it depends on the style, tempo, and so on of the music being played. Slow, more modern pieces often elicit more coughs. Additionally, Wagener found that, much as we think about yawning, coughing is contagious; one cough can set off an avalanche of others. The article also noted the possibility that some of the coughing during a classical music concert may not be the typical involuntary, reflexive cough, but a deliberate cough of social interaction. In terms of TOK, I thought this article was most interesting in suggesting that when we are put into a situation that makes us uncomfortable (often one of silence), we cough more. I related this to my own experiences at Friends, during MFW, when people often seem to cough out of a need for interaction. It would be interesting to see whether Wagener could work with geneticists and biologists to discover if the connection between slow classical music and more coughing is purely biological, or whether it stems from some other cause of human behavior.
sissij

How Murphy's Law Works - HowStuffWorks - 0 views

  • You're sitting in eight lanes of bumper-to-bumper traffic. You're more than ready to get home, but you notice, to your great dismay, that all of the other lanes seem to be moving. You change lanes. But once you do, the cars in your new lane come to a dead halt. At a standstill, you notice every lane on the highway (including the one you just left) is moving -- except yours.
  • whatever can go wrong will go wrong.
  • After all, we expect that things should work out in our favor. But when things go badly, we look for reasons.
  • ...1 more annotation...
  • It seems to poke fun at us for being such hotheads, and it uses the rules of probability -- the mathematical likelihood that something will occur -- to support itself. (A short simulation just below makes the traffic example concrete.)
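    A minimal sketch of that opening scene (Python; the lane count and number of time steps are arbitrary choices of mine): if every lane's speed fluctuates at random, yours is the single fastest lane only 1/N of the time, so with eight lanes the statistically normal experience, seven moments out of eight, is that somebody else's lane really is moving better.

        import random

        def fraction_fastest(lanes=8, steps=100_000):
            """In stop-and-go traffic where each lane's speed fluctuates
            at random, how often is *your* lane the fastest right now?"""
            mine_fastest = 0
            for _ in range(steps):
                speeds = [random.random() for _ in range(lanes)]
                mine_fastest += speeds[0] == max(speeds)  # lane 0 is yours
            return mine_fastest / steps

        print(f"your lane is fastest {fraction_fastest():.1%} of the time")
        # With 8 lanes this prints about 12.5%: most of the time some other
        # lane genuinely is doing better, no cosmic grudge required.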
  •  
    I found this law very interesting because it relates to our topic of our limited ability to deal with probabilities. We always tend to notice the bad things and the rare things that happen to us, and we name them coincidence. By the math, however, coincidence is not limited to things that rarely happen; it should also include things that have a high probability of happening. In the reading we did in Lesson Five, it mentions that we tend to think a cluster cannot be random, so we only find patterns among rarely occurring things. For example, the deaths of the two American presidents Lincoln and Kennedy. But what about the presidents who made it through their presidencies alive? Is it a coincidence that they all lived through their terms? --Sissi (11/18/2016)
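    Sissi's point, that high-probability events can still feel like coincidences, is the birthday paradox in miniature. A minimal simulation sketch (Python; the group sizes and trial count are illustrative): in a room of just 23 people, the seemingly striking coincidence of a shared birthday turns up about half the time, and with 45 people, roughly the number of U.S. presidents, it is close to certain.

        import random

        def shared_birthday(group_size, trials=100_000):
            """Estimate the probability that at least two people in a random
            group share a birthday (365 equally likely days assumed)."""
            hits = 0
            for _ in range(trials):
                days = [random.randrange(365) for _ in range(group_size)]
                hits += len(set(days)) < group_size  # a repeat means a match
            return hits / trials

        for n in (10, 23, 45):
            print(f"group of {n}: P(shared birthday) ~ {shared_birthday(n):.2f}")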
Javier E

Breathing In vs. Spacing Out - NYTimes.com - 0 views

  • Although pioneers like Jon Kabat-Zinn, now emeritus professor at the University of Massachusetts Medical Center, began teaching mindfulness meditation as a means of reducing stress as far back as the 1970s, all but a dozen or so of the nearly 100 randomized clinical trials have been published since 2005.
  • Michael Posner, of the University of Oregon, and Yi-Yuan Tang, of Texas Tech University, used functional M.R.I.’s before and after participants spent a combined 11 hours over two weeks practicing a form of mindfulness meditation developed by Tang. They found that it enhanced the integrity and efficiency of the brain’s white matter, the tissue that connects and protects neurons emanating from the anterior cingulate cortex, a region of particular importance for rational decision-making and effortful problem-solving.
  • Perhaps that is why mindfulness has proved beneficial to prospective graduate students. In May, the journal Psychological Science published the results of a randomized trial showing that undergraduates instructed to spend a mere 10 minutes a day for two weeks practicing mindfulness made significant improvement on the verbal portion of the Graduate Record Exam — a gain of 16 percentile points. They also significantly increased their working memory capacity, the ability to maintain and manipulate multiple items of attention.
  • ...7 more annotations...
  • By emphasizing a focus on the here and now, it trains the mind to stay on task and avoid distraction.
  • “Your ability to recognize what your mind is engaging with, and control that, is really a core strength,” said Peter Malinowski, a psychologist and neuroscientist at Liverpool John Moores University in England. “For some people who begin mindfulness training, it’s the first time in their life where they realize that a thought or emotion is not their only reality, that they have the ability to stay focused on something else, for instance their breathing, and let that emotion or thought just pass by.”
  • the higher adults scored on a measurement of mindfulness, the worse they performed on tests of implicit learning — the kind that underlies all sorts of acquired skills and habits but that occurs without conscious awareness.
  • he found that having participants spend a brief period of time on an undemanding task that maximizes mind wandering improved their subsequent performance on a test of creativity. In a follow-up study, he reported that physicists and writers alike came up with their most insightful ideas while spacing out.
  • The trick is knowing when mindfulness is called for and when it’s not.
  • one of the most surprising findings of recent mindfulness studies is that it could have unwanted side effects. Raising roadblocks to the mind’s peregrinations could, after all, prevent the very sort of mental vacations that lead to epiphanies.
  • “There’s so much our brain is doing when we’re not aware of it,” said the study’s leader, Chelsea Stillman, a doctoral candidate. “We know that being mindful is really good for a lot of explicit cognitive functions. But it might not be so useful when you want to form new habits.” Learning to ride a bicycle, speak grammatically or interpret the meaning of people’s facial expressions are three examples of knowledge we acquire through implicit learning
Javier E

George Packer: Is Amazon Bad for Books? : The New Yorker - 0 views

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • ...38 more annotations...
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent.
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year.
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue. While the company was building the
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of them dreck, while r
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • “The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
Javier E

Book Club: A Guide To Living « The Dish - 0 views

  • He proves nothing that he doesn’t simultaneously subvert a little; he makes no over-arching argument about the way humans must live; he has no logician’s architecture or religious doctrine. He slips past all those familiar means of telling other people what’s good for them, and simply explains what has worked for him and others and leaves the reader empowered to forge her own future
  • You can see its eccentric power by considering the alternative ways of doing what Montaigne was doing. Think of contemporary self-help books – and all the fake certainty and rigid formulae they contain. Or think of a hideous idea like “the purpose-driven life” in which everything must be forced into the box of divine guidance in order to really live at all. Think of the stringency of Christian disciplines – say, the spiritual exercises of Ignatius of Loyola – and marvel at how Montaigne offers an entirely different and less compelling way to live. Think of the rigidity of Muslim practice and notice how much lee-way Montaigne gives to sin
  • This is a non-philosophical philosophy. It is a theory of practical life as told through one man’s random and yet not-so-random reflections on his time on earth. And it is shot through with doubt. Even the maxims that Montaigne embraces for living are edged with those critical elements of Montaigne’s thought that say “as far as I know”
  • ...4 more annotations...
  • Is this enough? Or is it rather a capitulation to relativism, a manifesto for political quietism, a worldview that treats injustice as something to be abhorred but not constantly fought against? This might be seen as the core progressive objection to the way of Montaigne. Or is his sensibility in an age of religious terror and violence and fanaticism the only ultimate solution we have?
  • here’s what we do know. We are fallible beings; we have nothing but provisional knowledge; and we will die. And this is enough. This does not mean we should give up inquiring or seeking to understand. Skepticism is not nihilism. It doesn’t posit that there is no truth; it merely notes that if truth exists, it is inherently beyond our ultimate grasp. And accepting those limits is the first step toward sanity, toward getting on with life. This is what I mean by conservatism.
  • you can find in philosophy any number of clues about how to live; you can even construct them into an ideology that explains all of human life and society – like Marxism or free market fundamentalism or a Nietzschean will to power. But as each totalist system broke down upon my further inspection, I found myself returning to Montaigne and the tradition of skepticism he represents
  • If I were to single out one theme of Montaigne’s work that has stuck with me, it would be this staring of death in the face, early and often, and never flinching. It is what our culture refuses to do much of the time, thereby disempowering us in the face of our human challenges.
Javier E

Proofiness - Charles Seife - NYTimes.com - 1 views

  • From school days, we are trained to treat numbers as platonic, perfect objects. They are the closest we get to absolute truth. Two plus two always equals four. Numbers in the abstract are pure, perfect creatures. The numbers we deal with in the real world are different. They’re created by humans. And we humans are fallible. Our measurements have errors. Our research misses stuff, and we lie sometimes. The numbers we create aren’t perfect platonic ideals
  • We’re hard wired to reject the idea that there’s no reason for something happening. This is how Las Vegas makes its money. You’ll have people at the craps table thinking they’re set for a winning streak because they’ve been losing. And you’ll have people who have been winning so they think they’ll keep winning. Neither is true.
  • Randumbness is our stupidity about true randomness. We are unable to accept the fact that there’s not a pattern in certain things, so we project our own beliefs and patterns on data, which is pattern-free.
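    Seife's craps-table example is easy to check by simulation. A minimal sketch (Python; the streak length and flip count are arbitrary): in a million fair coin flips the longest streak runs to around twenty, exactly the kind of pattern-free cluster that "randumbness" misreads as meaningful, and conditioning on five tails in a row still leaves the next flip at 50-50, which is why the losing gambler is never "due" a win.

        import random

        def longest_streak(flips):
            """Length of the longest run of identical outcomes."""
            best = run = 1
            for prev, cur in zip(flips, flips[1:]):
                run = run + 1 if cur == prev else 1
                best = max(best, run)
            return best

        random.seed(0)  # fixed seed so the illustration is reproducible
        flips = [random.choice("HT") for _ in range(1_000_000)]
        print("longest streak in a million fair flips:", longest_streak(flips))

        # Gambler's fallacy check: after five tails in a row, is heads "due"?
        after_streak = heads_next = 0
        for i in range(5, len(flips)):
            if flips[i - 5:i] == ["T"] * 5:
                after_streak += 1
                heads_next += flips[i] == "H"
        print(f"P(heads after five tails) ~ {heads_next / after_streak:.3f}")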