
TOK Friends: Group items matching "reliability" in title, tags, annotations or URL

Javier E

Reasons for Reason - NYTimes.com - 0 views

  • Rick Perry’s recent vocal dismissals of evolution, and his confident assertion that “God is how we got here” reflect an obvious divide in our culture.
  • underneath this divide is a deeper one. Really divisive disagreements are typically not just over the facts. They are also about the best way to support our views of the facts. Call this a disagreement in epistemic principle. Our epistemic principles tell us what is rational to believe, what sources of information to trust.
  • I suspect that for most people, scientific evidence (or its lack) has nothing to do with it. Their belief in creationism is instead a reflection of a deeply held epistemic principle: that, at least on some topics, scripture is a more reliable source of information than science.  For others, including myself, this is never the case.
  • ...17 more annotations...
  • appealing to another method won’t help either — for unless that method can be shown to be reliable, using it to determine the reliability of the first method answers nothing.
  • Every one of our beliefs is produced by some method or source, be it humble (like memory) or complex (like technologically assisted science). But why think our methods, whatever they are, are trustworthy or reliable for getting at the truth? If I challenge one of your methods, you can’t just appeal to the same method to show that it is reliable. That would be circular
  • How do we rationally defend our most fundamental epistemic principles? Like many of the best philosophical mysteries, this is a problem that can seem both unanswerable and yet extremely important to solve.
  • it seems to suggest that in the end, all “rational” explanations end up grounding out on something arbitrary. It all just comes down to what you happen to believe, what you feel in your gut, your faith.  Human beings have historically found this to be a very seductive idea,
  • this is precisely the situation we seem to be headed towards in the United States. We live isolated in our separate bubbles of information culled from sources that only reinforce our prejudices and never challenge our basic assumptions. No wonder that — as the debates over evolution, or over what to include in textbooks, illustrate — we so often fail to reach agreement over the history and physical structure of the world itself. No wonder joint action grinds to a halt. When you can’t agree on your principles of evidence and rationality, you can’t agree on the facts. And if you can’t agree on the facts, you can hardly agree on what to do in the face of the facts.
  • We can’t decide on what counts as a legitimate reason to doubt my epistemic principles unless we’ve already settled on our principles—and that is the very issue in question.
  • The problem that skepticism about reason raises is not about whether I have good evidence by my principles for my principles. Presumably I do.[1] The problem is whether I can give a more objective defense of them. That is, whether I can give reasons for them that can be appreciated from what Hume called a “common point of view” — reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”[2]
  • Any way you go, it seems you must admit you can give no reason for trusting your methods, and hence can give no reason to defend your most fundamental epistemic principles.
  • So one reason we should take the project of defending our epistemic principles seriously is that the ideal of civility demands it.
  • there is also another, even deeper, reason. We need to justify our epistemic principles from a common point of view because we need shared epistemic principles in order to even have a common point of view. Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • democracies aren’t simply organizing a struggle for power between competing interests; democratic politics isn’t war by other means. Democracies are, or should be, spaces of reasons.
  • we need an epistemic common currency because we often have to decide, jointly, what to do in the face of disagreement.
  • Sometimes we can accomplish this, in a democratic society, by voting. But we can’t decide every issue that way
  • We need some forms of common currency before we get to the voting booth.
  • Even if, as the skeptic says, we can’t defend the truth of our principles without circularity, we might still be able to show that some are better than others. Observation and experiment, for example, aren’t just good because they are reliable means to the truth. They are valuable because almost everyone can appeal to them. They have roots in our natural instincts, as Hume might have said.
  • that is one reason we need to resist skepticism about reason: we need to be able to give reasons for why some standards of reasons — some epistemic principles — should be part of that currency and some not.
  • Reasons for Reason By MICHAEL P. LYNCH
Javier E

How Reliable Are the Social Sciences? - NYTimes.com - 3 views

  • media reports often seem to assume that any result presented as “scientific” has a claim to our serious attention. But this is hardly a reasonable view.  There is considerable distance between, say, the confidence we should place in astronomers’ calculations of eclipses and a small marketing study suggesting that consumers prefer laundry soap in blue boxes
  • A rational assessment of a scientific result must first take account of the broader context of the particular science involved.  Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science?
  • Second, and even more important, there is our overall assessment of work in a given science in comparison with other sciences.
  • ...12 more annotations...
  • The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive.
  • Even the best-developed social sciences like economics have nothing like this status.
  • when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events.  We may have a theory that explains all the known data, but that may be just the result of our having fitted the theory to that data.  The strongest support for a theory comes from its ability to correctly predict data that it was not designed to explain.
  • The case for a negative answer lies in the predictive power of the core natural sciences compared with even the most highly developed social sciences
  • Is there any work on the effectiveness of teaching that is solidly enough established to support major policy decisions?
  • While the physical sciences produce many detailed and precise predictions, the social sciences do not. 
  • most social science research falls far short of the natural sciences’ standard of controlled experiments.
  • Without a strong track record of experiments leading to successful predictions, there is seldom a basis for taking social scientific results as definitive.
  • Because of the many interrelated causes at work in social systems, many questions are simply “impervious to experimentation.”
  • even when we can get reliable experimental results, the causal complexity restricts us to “extremely conditional, statistical statements,” which severely limit the range of cases to which the results apply.
  • above all, we need to develop a much better sense of the severely limited reliability of social scientific results.   Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.
  • Given the limited predictive success and the lack of consensus in social sciences, their conclusions can seldom be primary guides to setting policy.  At best, they can supplement the general knowledge, practical experience, good sense and critical intelligence that we can only hope our political leaders will have.
kaylynfreeman

How Reliable Are the Social Sciences? - The New York Times - 1 views

  • How much authority should we give to such work in our policy decisions?  The question is important because media reports often seem to assume that any result presented as “scientific” has a claim to our serious attention.
  • A rational assessment of a scientific result must first take account of the broader context of the particular science involved.  Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science? 
  • Second, and even more important, there is our overall assessment of work in a given science in comparison with other sciences.  The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive. 
  • ...10 more annotations...
  • While the physical sciences produce many detailed and precise predictions, the social sciences do not.  The reason is that such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved.  For one thing, we are too complex: our behavior depends on an enormous number of tightly interconnected variables that are extraordinarily difficult to  distinguish and study separately
  • Without a strong track record of experiments leading to successful predictions, there is seldom a basis for taking social scientific results as definitive
  • [This is not to say that] our policy discussions should simply ignore social scientific research.  We should, as Manzi himself proposes, find ways of injecting more experimental data into government decisions.  But above all, we need to develop a much better sense of the severely limited reliability of social scientific results.  Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.
  • Given the limited predictive success and the lack of consensus in social sciences, their conclusions can seldom be primary guides to setting policy.  At best, they can supplement the general knowledge, practical experience, good sense and critical intelligence that we can only hope our political leaders will have.
  • Social sciences may be surrounded by the “paraphernalia” of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. 
Javier E

Why it's as hard to escape an echo chamber as it is to flee a cult | Aeon Essays - 0 views

  • there are two very different phenomena at play here, each of which subverts the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs.
  • they work in entirely different ways, and they require very different modes of intervention
  • An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
  • ...90 more annotations...
  • start with epistemic bubbles
  • That omission might be purposeful
  • But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests
  • An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited. Where an epistemic bubble merely omits contrary views, an echo chamber brings its members to actively distrust outsiders.
  • an echo chamber is something like a cult. A cult isolates its members by actively alienating them from any outside sources. Those outside are actively labelled as malignant and untrustworthy.
  • In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.
  • The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.
  • Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly
  • They have been in the limelight lately, most famously in Eli Pariser’s The Filter Bubble (2011) and Cass Sunstein’s #Republic: Divided Democracy in the Age of Social Media (2017).
  • The general gist: we get much of our news from Facebook feeds and similar sorts of social media. Our Facebook feed consists mostly of our friends and colleagues, the majority of whom share our own political and cultural views
  • various algorithms behind the scenes, such as those inside Google search, invisibly personalise our searches, making it more likely that we’ll see only what we want to see. These processes all impose filters on information.
  • Such filters aren’t necessarily bad. The world is overstuffed with information, and one can’t sort through it all by oneself: filters need to be outsourced.
  • That’s why we all depend on extended social networks to deliver us knowledge
  • any such informational network needs the right sort of broadness and variety to work
  • Each individual person in my network might be superbly reliable about her particular informational patch but, as an aggregate structure, my network lacks what Sanford Goldberg in his book Relying on Others (2010) calls ‘coverage-reliability’. It doesn’t deliver to me a sufficiently broad and representative coverage of all the relevant information.
  • Epistemic bubbles also threaten us with a second danger: excessive self-confidence.
  • An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission
  • Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection.
  • Luckily, though, epistemic bubbles are easily shattered. We can pop an epistemic bubble simply by exposing its members to the information and arguments that they’ve missed.
  • echo chambers are a far more pernicious and robust phenomenon.
  • Jamieson and Cappella’s book is the first empirical study into how echo chambers function
  • echo chambers work by systematically alienating their members from all outside epistemic sources.
  • Their research centres on Rush Limbaugh, a wildly successful conservative firebrand in the United States, along with Fox News and related media
  • His constant attacks on the ‘mainstream media’ are attempts to discredit all other sources of knowledge. He systematically undermines the integrity of anybody who expresses any kind of contrary view.
  • outsiders are not simply mistaken – they are malicious, manipulative and actively working to destroy Limbaugh and his followers. The resulting worldview is one of deeply opposed forces, an all-or-nothing war between good and evil
  • The result is a rather striking parallel to the techniques of emotional isolation typically practised in cult indoctrination
  • cult indoctrination involves new cult members being brought to distrust all non-cult members. This provides a social buffer against any attempts to extract the indoctrinated person from the cult.
  • The echo chamber doesn’t need any bad connectivity to function. Limbaugh’s followers have full access to outside sources of information
  • As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain
  • Their worldview can survive exposure to those outside voices because their belief system has prepared them for such intellectual onslaught.
  • exposure to contrary views could actually reinforce their views. Limbaugh might offer his followers a conspiracy theory: anybody who criticises him is doing it at the behest of a secret cabal of evil elites, which has already seized control of the mainstream media.
  • Perversely, exposure to outsiders with contrary views can thus increase echo-chamber members’ confidence in their insider sources, and hence their attachment to their worldview.
  • ‘evidential pre-emption’. What’s happening is a kind of intellectual judo, in which the power and enthusiasm of contrary voices are turned against those contrary voices through a carefully rigged internal structure of belief.
  • One might be tempted to think that the solution is just more intellectual autonomy. Echo chambers arise because we trust others too much, so the solution is to start thinking for ourselves.
  • that kind of radical intellectual autonomy is a pipe dream. If the philosophical study of knowledge has taught us anything in the past half-century, it is that we are irredeemably dependent on each other in almost every domain of knowledge
  • Limbaugh’s followers regularly read – but do not accept – mainstream and liberal news sources. They are isolated, not by selective exposure, but by changes in who they accept as authorities, experts and trusted sources.
  • we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.
  • I am quite confident that there are plenty of echo chambers on the political Left. More importantly, nothing about echo chambers restricts them to the arena of politics
  • The world of anti-vaccination is clearly an echo chamber, and it is one that crosses political lines. I’ve also encountered echo chambers on topics as broad as diet (Paleo!), exercise technique (CrossFit!), breastfeeding, some academic intellectual traditions, and many, many more
  • Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.
  • much of the recent analysis has lumped epistemic bubbles together with echo chambers into a single, unified phenomenon. But it is absolutely crucial to distinguish between the two.
  • Epistemic bubbles are rather ramshackle; they go up easily, and they collapse easily
  • Echo chambers are far more pernicious and far more robust. They can start to seem almost like living things. Their belief systems provide structural integrity, resilience and active responses to outside attacks
  • the two phenomena can also exist independently. And of the events we’re most worried about, it’s the echo-chamber effects that are really causing most of the trouble.
  • new data does, in fact, seem to show that people on Facebook actually do see posts from the other side, or that people often visit websites with opposite political affiliation.
  • their basis for evaluation – their background beliefs about whom to trust – are radically different. They are not irrational, but systematically misinformed about where to place their trust.
  • Many people have claimed that we have entered an era of ‘post-truth’.
  • Not only do some political figures seem to speak with a blatant disregard for the facts, but their supporters seem utterly unswayed by evidence. It seems, to some, that truth no longer matters.
  • This is an explanation in terms of total irrationality. To accept it, you must believe that a great number of people have lost all interest in evidence or investigation, and have fallen away from the ways of reason.
  • echo chambers offers a less damning and far more modest explanation. The apparent ‘post-truth’ attitude can be explained as the result of the manipulations of trust wrought by echo chambers.
  • We don’t have to attribute a complete disinterest in facts, evidence or reason to explain the post-truth attitude. We simply have to attribute to certain communities a vastly divergent set of trusted authorities.
  • An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.
  • in many ways, echo-chamber members are following reasonable and rational procedures of enquiry. They’re engaging in critical reasoning. They’re questioning, they’re evaluating sources for themselves, they’re assessing different pathways to information. They are critically examining those who claim expertise and trustworthiness, using what they already know about the world
  • none of this weighs against the existence of echo chambers. We should not dismiss the threat of echo chambers based only on evidence about connectivity and exposure.
  • Notice how different what’s going on here is from, say, Orwellian doublespeak, a deliberately ambiguous, euphemism-filled language designed to hide the intent of the speaker.
  • echo chambers don’t trade in vague, ambiguous pseudo-speech. We should expect that echo chambers would deliver crisp, clear, unambiguous claims about who is trustworthy and who is not
  • clearly articulated conspiracy theories, and crisply worded accusations of an outside world rife with untrustworthiness and corruption.
  • Once an echo chamber starts to grip a person, its mechanisms will reinforce themselves.
  • In an epistemically healthy life, the variety of our informational sources will put an upper limit to how much we’re willing to trust any single person. Everybody’s fallible; a healthy informational network tends to discover people’s mistakes and point them out. This puts an upper ceiling on how much you can trust even your most beloved leader
  • Inside an echo chamber, that upper ceiling disappears.
  • Being caught in an echo chamber is not always the result of laziness or bad faith. Imagine, for instance, that somebody has been raised and educated entirely inside an echo chamber
  • when the child finally comes into contact with the larger world – say, as a teenager – the echo chamber’s worldview is firmly in place. That teenager will distrust all sources outside her echo chamber, and she will have gotten there by following normal procedures for trust and learning.
  • It certainly seems like our teenager is behaving reasonably. She could be going about her intellectual life in perfectly good faith. She might be intellectually voracious, seeking out new sources, investigating them, and evaluating them using what she already knows.
  • The worry is that she’s intellectually trapped. Her earnest attempts at intellectual investigation are led astray by her upbringing and the social structure in which she is embedded.
  • Echo chambers might function like addiction, under certain accounts. It might be irrational to become addicted, but all it takes is a momentary lapse – once you’re addicted, your internal landscape is sufficiently rearranged such that it’s rational to continue with your addiction
  • Similarly, all it takes to enter an echo chamber is a momentary lapse of intellectual vigilance. Once you’re in, the echo chamber’s belief systems function as a trap, making future acts of intellectual vigilance only reinforce the echo chamber’s worldview.
  • There is at least one possible escape route, however. Notice that the logic of the echo chamber depends on the order in which we encounter the evidence. An echo chamber can bring our teenager to discredit outside beliefs precisely because she encountered the echo chamber’s claims first. Imagine a counterpart to our teenager who was raised outside of the echo chamber and exposed to a wide range of beliefs. Our free-range counterpart would, when she encounters that same echo chamber, likely see its many flaws
  • Those caught in an echo chamber are giving far too much weight to the evidence they encounter first, just because it’s first. Rationally, they should reconsider their beliefs without that arbitrary preference. But how does one enforce such informational a-historicity?
  • The escape route is a modified version of René Descartes’s infamous method.
  • Meditations on First Philosophy (1641). He had come to realise that many of the beliefs he had acquired in his early life were false. But early beliefs lead to all sorts of other beliefs, and any early falsehoods he’d accepted had surely infected the rest of his belief system.
  • The only solution, thought Descartes, was to throw all his beliefs away and start over again from scratch.
  • He could start over, trusting nothing and no one except those things that he could be entirely certain of, and stamping out those sneaky falsehoods once and for all. Let’s call this the Cartesian epistemic reboot.
  • Notice how close Descartes’s problem is to our hapless teenager’s, and how useful the solution might be. Our teenager, like Descartes, has problematic beliefs acquired in early childhood. These beliefs have infected outwards, infesting that teenager’s whole belief system. Our teenager, too, needs to throw everything away, and start over again.
  • Let’s call the modernised version of Descartes’s methodology the social-epistemic reboot.
  • when she starts from scratch, we won’t demand that she trust only what she’s absolutely certain of, nor will we demand that she go it alone
  • For the social reboot, she can proceed, after throwing everything away, in an utterly mundane way – trusting her senses, trusting others. But she must begin afresh socially – she must reconsider all possible sources of information with a presumptively equanimous eye. She must take the posture of a cognitive newborn, open and equally trusting to all outside sources
  • we’re not asking people to change their basic methods for learning about the world. They are permitted to trust, and trust freely. But after the social reboot, that trust will not be narrowly confined and deeply conditioned by the particular people they happened to be raised by.
  • Such a profound deep-cleanse of one’s whole belief system seems to be what’s actually required to escape. Look at the many stories of people leaving cults and echo chambers
  • Take, for example, the story of Derek Black in Florida – raised by a neo-Nazi father, and groomed from childhood to be a neo-Nazi leader. Black left the movement by, basically, performing a social reboot. He completely abandoned everything he’d believed in, and spent years building a new belief system from scratch. He immersed himself broadly and open-mindedly in everything he’d missed – pop culture, Arabic literature, the mainstream media, rap – all with an overall attitude of generosity and trust.
  • It was the project of years and a major act of self-reconstruction, but those extraordinary lengths might just be what’s actually required to undo the effects of an echo-chambered upbringing.
  • we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.
  • Stories of actual escapes from echo chambers often turn on particular encounters – moments when the echo-chambered individual starts to trust somebody on the outside.
  • Black’s is a case in point. By high school, he was already something of a star on neo-Nazi media, with his own radio talk-show. He went on to college, openly neo-Nazi, and was shunned by almost every other student in his community college. But then Matthew Stevenson, a Jewish fellow undergraduate, started inviting Black to Stevenson’s Shabbat dinners. In Black’s telling, Stevenson was unfailingly kind, open and generous, and slowly earned Black’s trust. This was the seed, says Black, that led to a massive intellectual upheaval – a slow-dawning realisation of the depths to which he had been misled
  • Similarly, accounts of people leaving echo-chambered homophobia rarely involve them encountering some institutionally reported fact. Rather, they tend to revolve around personal encounters – a child, a family member, a close friend coming out.
  • These encounters matter because a personal connection comes with a substantial store of trust.
  • We don’t simply trust people as educated experts in a field – we rely on their goodwill. And this is why trust, rather than mere reliability, is the key concept
  • goodwill is a general feature of a person’s character. If I demonstrate goodwill in action, then you have some reason to think that I also have goodwill in matters of thought and knowledge.
  • If one can demonstrate goodwill to an echo-chambered member – as Stevenson did with Black – then perhaps one can start to pierce that echo chamber.
  • the path I’m describing is a winding, narrow and fragile one. There is no guarantee that such trust can be established, and no clear path to its being established systematically.
  • what we’ve found here isn’t an escape route at all. It depends on the intervention of another. This path is not even one an echo-chamber member can trigger on her own; it is only a whisper-thin hope for rescue from the outside.
sandrine_h

Top court says evidence from hypnosis not reliable - Canada - CBC News - 0 views

  • The Supreme Court of Canada ruled Thursday that evidence obtained through hypnosis should not be used in criminal cases because testimony based on such evidence is not "sufficiently reliable" in a court of law.
  • evidence obtained through hypnosis has been used by Canadian courts for nearly 30 years.
  • the technique of hypnosis and its impact on human memory are not understood well enough for post-hypnosis testimony to be sufficiently reliable in a court of law
  • ...4 more annotations...
  • hypnosis can, in certain circumstances, result in the distortion of memory.
  • Initially, the neighbour told police she saw Trochym on the afternoon of Thursday, Oct. 15, 1992, but after she underwent hypnosis at the request of police, she remembered she saw the accused leave on Wednesday afternoon.
  • In its ruling, the court said the dangers posed by problems with the evidence could deprive an accused of a fair trial.
  • But dissenting judges, in their reasons, expressed concern about the majority ruling in which hypnosis is described as a "novel science" and "hypnotically refreshed memories" are now considered inadmissible as evidence. "This ignores the fact that the technique has been used in Canada for almost 30 years, and has been employed in Canadian criminal investigations to assist in memory retrieval of both Crown and defence witnesses for a similar amount of time," they wrote. "Hypnosis is not new science, nor is its use in forensic investigation new."
Javier E

Opinion | A New Dark Age Looms - The New York Times - 0 views

  • IMAGINE a future in which humanity’s accumulated wisdom about Earth — our vast experience with weather trends, fish spawning and migration patterns, plant pollination and much more — turns increasingly obsolete. As each decade passes, knowledge of Earth’s past becomes progressively less effective as a guide to the future. Civilization enters a dark age in its practical understanding of our planet.
  • as Earth warms, our historical understanding will turn obsolete faster than we can replace it with new knowledge. Some patterns will change significantly; others will be largely unaffected, though it will be difficult to say what will change, by how much, and when.
  • Until then, farmers will struggle to reliably predict new seasonal patterns and regularly plant the wrong crops. Early signs of major drought will go unrecognized, so costly irrigation will be built in the wrong places. Disruptive societal impacts will be widespread.
  • ...11 more annotations...
  • Such a dark age is a growing possibility. In a recent report, the National Academies of Sciences, Engineering and Medicine concluded that human-caused global warming was already altering patterns of some extreme weather events
  • disrupting nature’s patterns could extend well beyond extreme weather, with far more pervasive impacts.
  • Our foundation of Earth knowledge, largely derived from historically observed patterns, has been central to society’s progress.
  • Science has accelerated this learning process through advanced observation methods and pattern discovery techniques. These allow us to anticipate the future with a consistency unimaginable to our ancestors
  • As Earth’s warming stabilizes, new patterns begin to appear. At first, they are confusing and hard to identify. Scientists note similarities to Earth’s emergence from the last ice age. These new patterns need many years — sometimes decades or more — to reveal themselves fully, even when monitored with our sophisticated observing systems
  • The list of possible disruptions is long and alarming. We could see changes to the prevalence of crop and human pests, like locust plagues set off by drought conditions; forest fire frequency; the dynamics of the predator-prey food chain; the identification and productivity of reliably arable land, and the predictability of agriculture output.
  • Historians of the next century will grasp the importance of this decline in our ability to predict the future. They may mark the coming decades of this century as the period during which humanity, despite rapid technological and scientific advances, achieved “peak knowledge” about the planet it occupies
  • The intermediate time period is our big challenge. Without substantial scientific breakthroughs, we will remain reliant on pattern-based methods for time periods between a month and a decade. The problem is, as the planet warms, these patterns will become increasingly difficult to discern.
  • The oceans, which play a major role in global weather patterns, will also see substantial changes as global temperatures rise. Ocean currents and circulation patterns evolve on time scales of decades and longer, and fisheries change in response. We lack reliable, physics-based models to tell us how this occurs
  • Civilization’s understanding of Earth has expanded enormously in recent decades, making humanity safer and more prosperous. As the patterns that we have come to expect are disrupted by warming temperatures, we will face huge challenges feeding a growing population and prospering within our planet’s finite resources. New developments in science offer our best hope for keeping up, but this is by no means guaranteed
  • Our grandchildren could grow up knowing less about the planet than we do today. This is not a legacy we want to leave them. Yet we are on the verge of ensuring this happens.
sissij

What Facebook Owes to Journalism - The New York Times - 0 views

  • declared that “a strong news industry is also critical to building an informed community.”
  • Unfortunately, his memo ignored two major points — the role that Facebook and other technology platforms are playing in inadvertently damaging local news media, and the one way they could actually save journalism: with a massive philanthropic commitment.
  • As advertising spending shifted from print, TV and radio to the internet, the money didn’t mostly go to digital news organizations. Increasingly, it goes to Facebook and Google.
  • ...2 more annotations...
  • But just because the result is unintentional doesn’t mean it is fantasy: Newsrooms have been decimated, with basic accountability reporting slashed as a result.
  • I’m not saying that the good stuff — the mobile revolution, blocking intrusive ads, better marketing options for small businesses — doesn’t outweigh the bad. And local news organizations absolutely contributed to the problem with their sluggish and often uncreative reaction to the digital revolution.
  •  
    This article discusses the impact of the internet on local news organizations. I agree with the author that the internet takes much of the ad money and leaves local news organizations with less funding. Although there are donations, they are still very small compared with what local news organizations used to have. This might be part of the reason why local news organizations don't do well at delivering good information. But as time moves forward, Facebook and Google should take some of the responsibility, since they receive more funding and resources. This article is very persuasive because it supports its case with plenty of data and evidence. I really like that the author acknowledges the counterargument in his article, which makes it more reliable. --Sissi (2/22/2017)
sissij

By Demanding Too Much from Science, We Became a Post-Truth Society | Big Think - 1 views

  • The number of people who today openly question reality are not the tin-foil hat-wearing kind. Increasingly they are our friends, and those who hold positions of power.
  • Indeed, the public understanding of what constitutes valid evidence, and a worthy expert opinion, seems to be at an all time low.
  • Well, a new study suggests that this wealth of information might be the problem.
  • ...6 more annotations...
  • A new study out of Germany has found that people are much more confident in the claims of a popular science article than they are in the claims of an academic article written for experts
  • It was also found that the subjects were more confident in their own judgments after reading a popular article, and that this was tied to a lessened desire to seek out more information from expert sources.
  • "easiness effect”
  • the issue arises from the manner in which popular science is presented; as opposed to how scientists themselves present data to each other and to the public.
  • This emboldens people to reject the ideas of experts who they see as superfluous to their understanding of an idea (which they have already grasped).
  • notably health
  •  
    Although many people present themselves as scientific when trying to convince others, citing scientific research they read in the mass media, does that really make their points more reliable? Not really. Popular science is sometimes not as meticulous as the academic articles written for experts. In popular science articles, the authors often change their writing style to suit the general population, for instance by adopting a more certain tone. This appeals to readers' desire for simplicity, a tendency called the "easiness effect", which I find really similar to the logical fallacies we talked about in TOK. Science itself has more and more become a label that can make an argument seem more rational. However, science is all about the scientific method used in the research, which is an art of systematic simplification. Without that element, the title "science" means nothing. --Sissi (2/10/2017)
oliviaodon

What is George Orwell's 1984 about, why have sales soared since Trump adviser Kellyanne Conway referred to 'alternative facts' and what's happening on April 4? - 0 views

  • GEORGE Orwell’s dystopian novel 1984 has had “doublegood” sales this week after one of Trump’s advisers used the phrase “alternative facts” in an interview.
  • Orwell's novel 1984 is a bleak portrayal of Great Britain re-imagined as a dystopian superstate governed by a dictatorial regime.
  • Many concepts of the novel have crossed over to popular culture or have entered common use in everyday life - the repressive regime is overseen by Big Brother, and the government's invented language "newspeak" was designed to limit freedom of thought. The term "doublethink" - where a person can accept two contradicting beliefs as both being correct - first emerged in the dystopian landscape of Airstrip One.
  • ...2 more annotations...
  • The public started drawing comparisons between the Inner Party's regime and Trump's presidency when his adviser used the phrase "alternative facts" in an interview. Kellyanne Conway was being quizzed after the White House press secretary Sean Spicer apparently lied about the number of people who attended Trump's inauguration. The presenter asked why President Trump had asked Spicer to come out to speak to the press and "utter a falsehood". Conway responded that Spicer didn't utter a falsehood but gave "alternative facts". People drew comparisons with "newspeak", which was aimed at wiping out original thought. Her choice of language was also accused of representing "doublespeak" - which Orwell wrote "means the power of holding two contradictory beliefs in one's mind simultaneously." Washington Post reporter Karen Tumulty said: "Alternative facts is a George Orwell phrase".
  • Sales of 1984 also soared in 2013 when news broke of the National Security Administration's Prism surveillance scandal.
  •  
    *note: the Sun is not a reliable source, but I thought this was an interesting read nonetheless
sissij

The Purpose of Sleep? To Forget, Scientists Say - The New York Times - 1 views

  • Some have argued that it’s a way to save energy. Others have suggested that slumber provides an opportunity to clear away the brain’s cellular waste. Still others have proposed that sleep simply forces animals to lie still, letting them hide from predators.
  • It turns out, for example, that neurons can prune their synapses — at least in a dish.
  • Dr. Diering and his colleagues then searched for the molecular trigger for this change. They found that hundreds of proteins increase or decrease inside of synapses during the night. But one protein in particular, called Homer1A, stood out.
  • ...1 more annotation...
  • “Once you know a little bit of what happens at the ground-truth level, you can get a better idea of what to do for therapy,” Dr. Tononi said.
  •  
    I find this article very interesting. Every day there are all sorts of articles alleging that scientists say this and that; sometimes they even contradict each other. I feel that the science in today's newspapers is hardly reliable, since science is a social project that is only accessible to a community of specialists. The general population usually plays the role of acceptors, and many mass media outlets use the name of science to put up claims that mislead people. It is really hard for us, the general population, to make sure that what we read in a newspaper's science section is really science and not another piece of fake news. --Sissi (2/4/2017)
Javier E

"Wikipedia Is Not Truth" - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • Timothy Messer-Kruse tried to update the Wiki page on the Haymarket riot of 1886 to correct a long-standing inaccurate claim. Even though he's written two books and numerous articles on the subject, his changes were instantly rejected: I had cited the documents that proved my point, including verbatim testimony from the trial published online by the Library of Congress. I also noted one of my own peer-reviewed articles. One of the people who had assumed the role of keeper of this bit of history for Wikipedia quoted the Web site's "undue weight" policy, which states that "articles should not give minority views as much or as detailed a description as more popular views."
  • "Explain to me, then, how a 'minority' source with facts on its side would ever appear against a wrong 'majority' one?" I asked the Wiki-gatekeeper. ...  Another editor cheerfully tutored me in what this means: "Wikipedia is not 'truth,' Wikipedia is 'verifiability' of reliable sources. Hence, if most secondary sources which are taken as reliable happen to repeat a flawed account or description of something, Wikipedia will echo that."
Roth johnson

'Life Keeps Changing': Why Stories, Not Science, Explain the World - Joe Fassler - The Atlantic - 0 views

  •  
    Exactly what we're talking about! Are stories more reliable than science or vice versa?
gszumel

North Korea Deletes 350,000 Articles From Its Highly Reliable State-Run News Site - Yahoo News - 1 views

  •  
    North Korea attempting to alter history by deleting it.
Javier E

After the Fact - The New Yorker - 1 views

  • newish is the rhetoric of unreality, the insistence, chiefly by Democrats, that some politicians are incapable of perceiving the truth because they have an epistemological deficit: they no longer believe in evidence, or even in objective reality.
  • the past of proof is strange and, on its uncertain future, much in public life turns. In the end, it comes down to this: the history of truth is cockamamie, and lately it’s been getting cockamamier.
  • Michael P. Lynch is a philosopher of truth. His fascinating new book, “The Internet of Us: Knowing More and Understanding Less in the Age of Big Data,” begins with a thought experiment: “Imagine a society where smartphones are miniaturized and hooked directly into a person’s brain.” As thought experiments go, this one isn’t much of a stretch. (“Eventually, you’ll have an implant,” Google’s Larry Page has promised, “where if you think about a fact it will just tell you the answer.”) Now imagine that, after living with these implants for generations, people grow to rely on them, to know what they know and forget how people used to learn—by observation, inquiry, and reason. Then picture this: overnight, an environmental disaster destroys so much of the planet’s electronic-communications grid that everyone’s implant crashes. It would be, Lynch says, as if the whole world had suddenly gone blind. There would be no immediate basis on which to establish the truth of a fact. No one would really know anything anymore, because no one would know how to know. I Google, therefore I am not.
  • ...20 more annotations...
  • In England, the abolition of trial by ordeal led to the adoption of trial by jury for criminal cases. This required a new doctrine of evidence and a new method of inquiry, and led to what the historian Barbara Shapiro has called “the culture of fact”: the idea that an observed or witnessed act or thing—the substance, the matter, of fact—is the basis of truth and the only kind of evidence that’s admissible not only in court but also in other realms where truth is arbitrated. Between the thirteenth century and the nineteenth, the fact spread from law outward to science, history, and journalism.
  • Lynch isn’t terribly interested in how we got here. He begins at the arrival gate. But altering the flight plan would seem to require going back to the gate of departure.
  • Lynch thinks we are frighteningly close to this point: blind to proof, no longer able to know. After all, we’re already no longer able to agree about how to know. (See: climate change, above.)
  • We now only rarely discover facts, Lynch observes; instead, we download them.
  • For the length of the eighteenth century and much of the nineteenth, truth seemed more knowable, but after that it got murkier. Somewhere in the middle of the twentieth century, fundamentalism and postmodernism, the religious right and the academic left, met up: either the only truth is the truth of the divine or there is no truth; for both, empiricism is an error.
  • That epistemological havoc has never ended: much of contemporary discourse and pretty much all of American politics is a dispute over evidence. An American Presidential debate has a lot more in common with trial by combat than with trial by jury,
  • came the Internet. The era of the fact is coming to an end: the place once held by “facts” is being taken over by “data.” This is making for more epistemological mayhem, not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines
  • “Most knowing now is Google-knowing—knowledge acquired online,”
  • Empiricists believed they had deduced a method by which they could discover a universe of truth: impartial, verifiable knowledge. But the movement of judgment from God to man wreaked epistemological havoc.
  • “The Internet didn’t create this problem, but it is exaggerating it,”
  • nothing could be less well settled in the twenty-first century than whether people know what they know from faith or from facts, or whether anything, in the end, can really be said to be fully proved.
  • In his 2012 book, “In Praise of Reason,” Lynch identified three sources of skepticism about reason: the suspicion that all reasoning is rationalization, the idea that science is just another faith, and the notion that objectivity is an illusion. These ideas have a specific intellectual history, and none of them are on the wane.
  • Their consequences, he believes, are dire: “Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • When we Google-know, Lynch argues, we no longer take responsibility for our own beliefs, and we lack the capacity to see how bits of facts fit into a larger whole
  • Essentially, we forfeit our reason and, in a republic, our citizenship. You can see how this works every time you try to get to the bottom of a story by reading the news on your smartphone.
  • what you see when you Google “Polish workers” is a function of, among other things, your language, your location, and your personal Web history. Reason can’t defend itself. Neither can Google.
  • Trump doesn’t reason. He’s a lot like that kid who stole my bat. He wants combat. Cruz’s appeal is to the judgment of God. “Father God, please . . . awaken the body of Christ, that we might pull back from the abyss,” he preached on the campaign trail. Rubio’s appeal is to Google.
  • Is there another appeal? People who care about civil society have two choices: find some epistemic principles other than empiricism on which everyone can agree or else find some method other than reason with which to defend empiricism
  • Lynch suspects that doing the first of these things is not possible, but that the second might be. He thinks the best defense of reason is a common practical and ethical commitment.
  • That, anyway, is what Alexander Hamilton meant in the Federalist Papers, when he explained that the United States is an act of empirical inquiry: “It seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.”
Javier E

A New Dark Age Looms - The New York Times - 1 views

  • picture yourself in our grandchildren’s time, a century hence. Significant global warming has occurred, as scientists predicted. Nature’s longstanding, repeatable patterns — relied on for millenniums by humanity to plan everything from infrastructure to agriculture — are no longer so reliable. Cycles that have been largely unwavering during modern human history are disrupted by substantial changes in temperature and precipitation.
  • As Earth’s warming stabilizes, new patterns begin to appear. At first, they are confusing and hard to identify. Scientists note similarities to Earth’s emergence from the last ice age. These new patterns need many years — sometimes decades or more — to reveal themselves fully, even when monitored with our sophisticated observing systems
  • Disruptive societal impacts will be widespread.
  • ...9 more annotations...
  • Our foundation of Earth knowledge, largely derived from historically observed patterns, has been central to society’s progress. Early cultures kept track of nature’s ebb and flow, passing improved knowledge about hunting and agriculture to each new generation. Science has accelerated this learning process through advanced observation methods and pattern discovery techniques. These allow us to anticipate the future with a consistency unimaginable to our ancestors.
  • But as Earth warms, our historical understanding will turn obsolete faster than we can replace it with new knowledge. Some patterns will change significantly; others will be largely unaffected
  • The list of possible disruptions is long and alarming.
  • Historians of the next century will grasp the importance of this decline in our ability to predict the future. They may mark the coming decades of this century as the period during which humanity, despite rapid technological and scientific advances, achieved “peak knowledge” about the planet it occupies
  • One exception to this pattern-based knowledge is the weather, whose underlying physics governs how the atmosphere moves and adjusts. Because we understand the physics, we can replicate the atmosphere with computer models.
  • But farmers need to think a season or more ahead. So do infrastructure planners as they design new energy and water systems
  • The intermediate time period is our big challenge. Without substantial scientific breakthroughs, we will remain reliant on pattern-based methods for time periods between a month and a decade. The problem is, as the planet warms, these patterns will become increasingly difficult to discern.
  • The oceans, which play a major role in global weather patterns, will also see substantial changes as global temperatures rise. Ocean currents and circulation patterns evolve on time scales of decades and longer, and fisheries change in response. We lack reliable, physics-based models to tell us how this occurs.
  • Our grandchildren could grow up knowing less about the planet than we do today. This is not a legacy we want to leave them. Yet we are on the verge of ensuring this happens.
sissij

Unsealed Documents Raise Questions on Monsanto Weed Killer - The New York Times - 0 views

  • The court documents included Monsanto’s internal emails and email traffic between the company and federal regulators. The records suggested that Monsanto had ghostwritten research that was later attributed to academics and indicated that a senior official at the Environmental Protection Agency had worked to quash a review of Roundup’s main ingredient, glyphosate, that was to have been conducted by the United States Department of Health and Human Services.
  • The safety of glyphosate is not settled science.
  • In a statement, Monsanto said, “Glyphosate is not a carcinogen.”
  • ...3 more annotations...
  • Monsanto also rebutted suggestions that the disclosures highlighted concerns that the academic research it underwrites is compromised.
  • they could ghostwrite research on glyphosate by hiring academics to put their names on papers that were actually written by Monsanto.
  • The issue of glyphosate’s safety is not a trivial one for Americans. Over the last two decades, Monsanto has genetically re-engineered corn, soybeans and cotton so it is much easier to spray them with the weed killer, and some 220 million pounds of glyphosate were used in 2015 in the United States.
  •  
    This news shows that there are many cases in which companies use science as a shield to convince people that their products are safe and good. Honesty in scientific papers has always been an important issue when we talk about the reliability of those papers. As we discussed in TOK, science is more like a social project that involves a lot of people, and all human work is more or less biased and subjective. Now that science is intertwined with economic interests, the issue becomes much more complicated. I think we should identify the sources behind a paper before citing any word from it, because who wrote the paper is a big factor in which side the paper is taking. --Sissi (3/14/2017)
dicindioha

Daniel Kahneman On Hiring Decisions - Business Insider - 0 views

  • Most hiring decisions come down to a gut decision. According to Nobel laureate Daniel Kahneman, however, this process is extremely flawed and there's a much better way.
    • dicindioha
       
      hiring comes down to 'gut feeling'
  • Kahneman asked interviewers to put aside personal judgments and limit interviews to a series of factual questions meant to generate a score on six separate personality traits. A few months later, it became clear that Kahneman's systematic approach was a vast improvement over gut decisions. It was so effective that the army would use his exact method for decades to come. Why you should care is because this superior method can be copied by any organization — and really, by anyone facing a hard decision.
  • First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don't overdo it — six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you will score it, say on a 1-5 scale. You should have an idea of what you will call "very weak" or "very strong." (A minimal scoring sketch in Python appears at the end of this entry.)
    • dicindioha
       
      WHAT YOU SHOULD DO IN AN INTERVIEW
  • ...2 more annotations...
  • Do not skip around. To evaluate each candidate add up the six scores ... Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better — try to resist your wish to invent broken legs to change the ranking.
  • than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as "I looked into his eyes and liked what I saw."
  •  
    We cannot always rely simply on a 'gut feeling' from our so-called 'reasoning' and emotional responses to make big decisions like hiring, which is what happens much of the time. This is a really interesting way to do it systematically: you still use your own perspective, but the questions you ask will hopefully lead you to a better outcome.
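A minimal sketch, in Python, of the scoring procedure described in the excerpts above: pick about six traits, score each one from 1 to 5 based on a few factual questions, sum the scores, and commit to the highest total. The trait names, scores, and candidates here are hypothetical placeholders, not taken from the article.

```python
# Minimal sketch of a structured, score-based hiring procedure.
# Traits and candidate scores below are hypothetical examples.

TRAITS = [
    "technical proficiency",
    "engaging personality",
    "reliability",
    "communication",
    "conscientiousness",
    "analytical skill",
]

def total_score(scores_by_trait):
    """Sum the six 1-5 trait scores collected during the interview."""
    for trait in TRAITS:
        score = scores_by_trait[trait]
        if not 1 <= score <= 5:
            raise ValueError(f"{trait}: {score} is outside the 1-5 scale")
    return sum(scores_by_trait[t] for t in TRAITS)

def rank_candidates(candidates):
    """Return (name, total) pairs sorted highest first; hire the top one."""
    totals = {name: total_score(scores) for name, scores in candidates.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    interviews = {
        "Candidate A": dict(zip(TRAITS, [4, 3, 5, 4, 4, 3])),
        "Candidate B": dict(zip(TRAITS, [5, 4, 3, 3, 4, 5])),
    }
    for name, total in rank_candidates(interviews):
        print(name, total)
```

The discipline is in the last step the excerpt insists on: the ranking is fixed by the summed scores rather than revised afterward to match an overall impression.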
sissij

Is Crime Forensics Flawed? | Big Think - 0 views

  • This is concerning because in recent years, time-honored methods such as fingerprinting, hair and fiber analysis, firearm analysis, and others have come under intense scrutiny.
  • Sessions plans to replace the commission with an internal body called the department crime task force, headed by a senior forensic adviser who will report to him directly. No one has been named for the position as of yet.
  • Since 1989, DNA evidence has exonerated 329 individuals. Bite-mark and hair analysis, part of what is known as pattern forensics, helped convict 25% of them.
  •  
    I have long been interested in forensics. There was a late-night show called Forensic Files that I really liked to watch. Since I learned in biology that human fingerprints are unique, I had never imagined that there are actually serious flaws in forensics. As long as there is human involvement in the process, it cannot be one hundred percent reliable. --Sissi (4/21/2017)
oliviaodon

Neil deGrasse Tyson: Science Deniers In Power Are A Profound Threat To Democracy | The Huffington Post - 0 views

  • The U.S. grew from a “backwoods country” to one of the “greatest nations the world has ever known” thanks to science — but that pillar of America is eroding, astrophysicist Neil deGrasse Tyson warns.
  • Science deniers “rising to power” now create a “recipe for the complete dismantling of our informed democracy,”
  • “People have lost the ability to judge what is true and what is not, what is reliable, what is not reliable,” he says in the above video, which he posted to Facebook Wednesday. “That’s not the country I remember growing up in. I don’t remember any other time where people were standing in denial of what science was.”
  • ...2 more annotations...
  • Tyson praises science as an “exercise in finding what is true” that’s based on peer-reviewed experimentation backed by other experiments and counter-experiments that gives birth to an “emergent truth.” He points out that science is “not something to toy with.” “You can’t say, ‘I chose not to believe in E=mc2,’” he says, referring to physicist Albert Einstein’s corroborated theory of special relativity. “You don’t have that option. It is true, whether or not you believe in it.”
  • Tyson warns that every minute someone is in denial of a scientific truth delays the “political solution that should have been established years ago.”  “Recognize what science is, and allow to be what it can and should be: In the service of civilization,” he says. “It’s in our hands.”
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve. (A small Python sketch of this state-machine idea appears at the end of these excerpts.)
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
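The excerpts on model-based design and on exhaustively checkable specifications describe the same basic move: write the rules as explicit states and transitions, then let a machine examine every reachable state before any production code exists. Below is a minimal Python sketch of that idea for the elevator example quoted above, with states such as "door open" and "moving" and a safety property that the car never moves while the door is open. It is only an illustration of the general technique under those assumptions; it is not Bantegnie's tooling, TLA+, or any real model checker.

```python
# Minimal sketch: an elevator modeled as explicit states and transition
# rules, with an exhaustive check of a safety property over every
# reachable state (the spirit of model-based design and specification
# tools, not their actual notation).

from collections import deque

# A state is (door, moving); the rules below say which transitions are legal.
INITIAL = ("closed", False)

def transitions(state):
    door, moving = state
    next_states = []
    if not moving and door == "closed":
        next_states.append(("open", False))      # open the door only when stopped
    if door == "open":
        next_states.append(("closed", False))    # close the door
    if door == "closed" and not moving:
        next_states.append(("closed", True))     # start moving only with door closed
    if moving:
        next_states.append(("closed", False))    # stop
    return next_states

def safety(state):
    door, moving = state
    return not (door == "open" and moving)       # never move with the door open

def check_model():
    """Breadth-first exploration of every state reachable from INITIAL."""
    seen, queue = {INITIAL}, deque([INITIAL])
    while queue:
        state = queue.popleft()
        if not safety(state):
            return f"Safety violated in state {state}"
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return f"Safety holds in all {len(seen)} reachable states"

if __name__ == "__main__":
    print(check_model())
```

Real specification languages and model checkers handle vastly larger state spaces and richer temporal properties, but the payoff is the one the excerpts describe: the design itself, not just the finished code, becomes something a machine can check.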