
TOK Friends: Group items tagged verification



The Oscar for Best Fabrication - NYTimes.com - 0 views

  • Hollywood always wants it both ways, of course, but this Oscar season is rife with contenders who bank on the authenticity of their films until it’s challenged, and then fall back on the “Hey, it’s just a movie” defense.
  • “Lincoln,” which had three historical advisers but still managed to make some historical bloopers. Joe Courtney, a Democratic congressman from Connecticut, recently wrote to Steven Spielberg to complain that “Lincoln” falsely showed two of Connecticut’s House members voting “Nay” against the 13th Amendment for the abolition of slavery.

Instagram introduces two-factor authentication | Technology | The Guardian - 0 views

  • Instagram has become the latest social network to enable two-factor authentication, a valuable security feature that protects accounts from being compromised due to password reuse or phishing.
  • Instagram joins Facebook, Twitter, Google and many others in offering some form of two-factor verification.
  • Confusingly for users, all the methods are slightly different: Twitter requires logging in to be approved by opening the app on a trusted device, and Google uses an open standard to link up with its authenticator app, which generates new six-digit codes every 30 seconds. [A minimal sketch of how such codes are generated appears after these annotations.]
  •  
    Internet security has been a big problem since the development of internet technology, and there is a lot of worry, especially about the safety of accounts. People put more and more of their lives online, so security risk becomes an issue. For example, many online payment apps let you pay without handling actual money, simply charging your bank account automatically. Although it is very convenient to have everything online, it is also unstable and risky. --Sissi (3/25/2017)
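
The Google authenticator codes mentioned above are typically time-based one-time passwords (TOTP, RFC 6238). The sketch below is a minimal Python illustration of how a six-digit, 30-second code can be derived from a shared secret; the base32 secret shown is a made-up example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Minimal TOTP (RFC 6238) sketch: derive a short-lived code from a shared secret."""
    key = base64.b32decode(shared_secret_b32)
    # Number of 30-second intervals elapsed since the Unix epoch.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset into the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Made-up demo secret; in practice the server and the phone hold the same secret,
# so each side can compute the code for the current 30-second window independently.
print(totp("JBSWY3DPEHPK3PXP"))
```

A login succeeds only if the code the user submits matches the one the server computes for roughly the same time window, which is why a stolen password alone is no longer enough.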

Big Data Troves Stay Forbidden to Social Scientists - NYTimes.com - 0 views

  • When scientists publish their research, they also make the underlying data available so the results can be verified by other scientists.
  • lately social scientists have come up against an exception that is, true to its name, huge. It is “big data,” the vast sets of information gathered by researchers at companies like Facebook, Google and Microsoft from patterns of cellphone calls, text messages and Internet clicks by millions of users around the world. Companies often refuse to make such information public, sometimes for competitive reasons and sometimes to protect customers’ privacy. But to many scientists, the practice is an invitation to bad science, secrecy and even potential fraud.
  • corporate control of data could give preferential access to an elite group of scientists at the largest corporations.
  • “In the Internet era,” said Andreas Weigend, a physicist and former chief scientist at Amazon, “research has moved out of the universities to the Googles, Amazons and Facebooks of the world.”
  • A recent review found that 44 of 50 leading scientific journals instructed their authors on sharing data but that fewer than 30 percent of the papers they published fully adhered to the instructions. A 2008 review of sharing requirements for genetics data found that 40 of 70 journals surveyed had policies, and that 17 of those were “weak.”

How the Professor Who Fooled Wikipedia Got Caught by Reddit - Yoni Appelbaum - Technolo... - 0 views

  • Last January, as he prepared to offer the class again, Kelly put the Internet on notice. He posted his syllabus and announced that his new, larger class was likely to create two separate hoaxes. He told members of the public to "consider yourself warned--twice."
  • One answer lies in the structure of the Internet's various communities. Wikipedia has a weak community, but centralizes the exchange of information. It has a small number of extremely active editors, but participation is declining, and most users feel little ownership of the content. And although everyone views the same information, edits take place on a separate page, and discussions of reliability on another, insulating ordinary users from any doubts that might be expressed. Facebook, where the Lincoln hoax took flight, has strong communities but decentralizes the exchange of information. Friends are quite likely to share content and to correct mistakes, but those corrections won't reach other users sharing or viewing the same content.
  • Reddit, by contrast, builds its strong community around the centralized exchange of information. Discussion isn't a separate activity but the sine qua non of the site. When one user voiced doubts, others saw the comment and quickly piled on.
  • Kelly's students, like all good con artists, built their stories out of small, compelling details to give them a veneer of veracity. Ultimately, though, they aimed to succeed less by assembling convincing stories than by exploiting the trust of their marks, inducing them to lower their guard. Most of us assess arguments, at least initially, by assessing those who make them. Kelly's students built blogs with strong first-person voices, and hit back hard at skeptics. Those inclined to doubt the stories were forced to doubt their authors. They inserted articles into Wikipedia, trading on the credibility of that site. And they aimed at very specific communities: the "beer lovers of Baltimore" and Reddit.
  • Reddit prides itself on winnowing the wheat from the chaff. It relies on the collective judgment of its members, who click on arrows next to contributions, elevating insightful or interesting content, and demoting less worthy contributions. Even Mills says he was impressed by the way in which redditors "marshaled their collective bits of expert knowledge to arrive at a conclusion that was largely correct." It's tough to con Reddit.
  • hoaxes tend to thrive in communities which exhibit high levels of trust. But on the Internet, where identities are malleable and uncertain, we all might be well advised to err on the side of skepticism.

Getting It Right - NYTimes.com - 1 views

  • What is it to truly know something? In our daily lives, we might not give this much thought — most of us rely on what we consider to be fair judgment and common sense in establishing knowledge
  • is a form of action, comparable to an archer’s success when he consciously aims to hit a target.
  • fix the justified-true-belief account with minor modifications, philosophers tried more radical departures.
  • can be accurate (successful in hitting the target). It can also be adroit (skillful or competent). An archery shot is adroit only if, as the arrow leaves the bow, it is oriented well and powerfully enough. But a shot that is both accurate and adroit can still fall short.
  • fully successful attempt is good overall only if the agent’s goal is good enough. An attempt to murder an innocent person is not good even if it fully succeeds.
  • affirmation is one aimed at attaining truth, at getting it right
  • Virtue epistemology begins by recognizing assertions or affirmations
  • Virtue epistemology gives an AAA account of knowledge: to know affirmatively is to make an affirmation that is accurate (true) and adroit (which requires taking proper account of the evidence).
  • Requiring knowledge to be apt (in addition to accurate and adroit) reconfigures epistemology as the ethics of belief.
  • We now have an explanation for why you fail to know that someone in the cafe owns a Bentley, when your own Bentley has been destroyed by a bomb, but the barista happens to own one. Your belief in that case falls short of knowledge for the reason that it fails to be apt. You are right that someone in the cafe owns a Bentley, but the correctness of your belief does not manifest your cognitive or epistemic competence. You are right only because by epistemic luck the barista happens to own one.

In Defense of a Loaded Word - NYTimes.com - 2 views

  • words take on meaning within a context. It might be true that you refer to your spouse as Baby. But were I to take this as license to do the same, you would most likely protest. Right names depend on right relationships, a fact so basic to human speech that without it, human language might well collapse. But as with so much of what we take as human, we seem to be in need of an African-American exception
  • This is the politics of respectability — an attempt to raise black people to a superhuman standard. In this case it means exempting black people from a basic rule of communication — that words take on meaning from context and relationship. But as in all cases of respectability politics, what we are really saying to black people is, “Be less human.” This is not a fight over civil rights; it’s an attempt to raise a double standard.
  • To prevent enabling oppression, we demand that black people be twice as good. To prevent verifying stereotypes, we pledge to never eat a slice of watermelon in front of white people
  • But white racism needs no verification from black people. And a scientific poll of right-thinking humans will always conclude that watermelon is awesome. That is because its taste and texture appeal to certain attributes that humans tend to find pleasurable. Humans also tend to find community to be pleasurable, and within the boundaries of community relationships, words — often ironic and self-deprecating — are always spoken that take on other meanings when uttered by others.
  • A separate and unequal standard for black people is always wrong. And the desire to ban the word “nigger” is not anti-racism, it is finishing school
  • I am certain that should I decide to join in, I would invite the same hard conversation that would greet me, should I ever call my father Billy.
  • A few summers ago one of my best friends invited me up to what he affectionately called his “white-trash cabin” in the Adirondacks. This was not how I described the outing to my family. Two of my Jewish acquaintances once joked that I’d “make a good Jew.” My retort was not, “Yeah, I certainly am good with money.”
  • When Matt Barnes used the word “niggas” he was being inappropriate. When Richie Incognito and Riley Cooper used “nigger,” they were being violent and offensive. That we have trouble distinguishing the two evidences our discomfort with the great chasm between black and white America
  • That such a seemingly hateful word should return as a marker of nationhood and community confounds our very notions of power. “Nigger” is different because it is attached to one of the most vibrant cultures in the Western world. And yet the culture is inextricably linked to the violence that birthed us. “Nigger” is the border, the signpost that reminds us that the old crimes don’t disappear. It tells white people that, for all their guns and all their gold, there will always be places they can never go.

The Narrative Frays for Theranos and Elizabeth Holmes - The New York Times - 1 views

  • Few people, let alone those just 31 years old, have amassed the accolades and riches bestowed on Elizabeth Holmes, founder and chief executive of the blood-testing start-up Theranos.
  • This year President Obama named her a United States ambassador for global entrepreneurship. She gave the commencement address at Pepperdine University. She was the youngest person ever to be awarded the Horatio Alger Award in recognition of “remarkable achievements accomplished through honesty, hard work, self-reliance and perseverance over adversity.” She is on the Board of Fellows of Harvard Medical School.
  • Time named her one of the 100 Most Influential People in the World this year. She was the subject of lengthy profiles in The New Yorker and Fortune. Over the last week, she appeared on the cover of T: The New York Times Style Magazine, and Glamour anointed her one of its eight Women of the Year. She has been on “Charlie Rose,” as well as on stage at the Clinton Global Initiative, the World Economic Forum at Davos and the Aspen Ideas Festival, among numerous other conferences.
  • Theranos, which she started after dropping out of Stanford at age 19, has raised more than $400 million in venture capital and has been valued at $9 billion, which makes Ms. Holmes’s 50 percent stake worth $4.5 billion. Forbes put her on the cover of its Forbes 400 issue, ranking her No. 121 on the list of wealthiest Americans.
  • Thanks to an investigative article in The Wall Street Journal this month by John Carreyrou, one of the company’s central claims, and the one most exciting to many investors and doctors, is being called into question. Theranos has acknowledged it was only running a limited number of tests on a microsample of blood using its finger-prick technology. Since then, it said it had stopped using its proprietary methods on all but one relatively simple test for herpes.
  • “The constant was that nobody had any idea how this works or even if it works,” Mr. Loria told me this week. “People in medicine couldn’t understand why the media and technology worlds were so in thrall to her.”
  • that so many eminent authorities — from Henry Kissinger, who had served on the company’s board; to prominent investors like the Oracle founder Larry Ellison; to the Cleveland Clinic — appear to have embraced Theranos with minimal scrutiny is a testament to the ageless power of a great story.
  • Ms. Holmes seems to have perfectly executed the current Silicon Valley playbook: Drop out of a prestigious college to pursue an entrepreneurial vision; adopt an iconic uniform; embrace an extreme diet; and champion a humanitarian mission, preferably one that can be summed up in one catchy phrase.
  • She stays relentlessly on message, as a review of her numerous conference and TV appearances makes clear, while at the same time saying little of scientific substance.
  • The natural human tendency to fit complex facts into a simple, compelling narrative has grown stronger in the digital age of 24/7 news and social media,
  • “We’re deluged with information even as pressure has grown to make snap decisions,”
  • “People see a TED talk. They hear this amazing story of a 30-something-year-old woman with a wonder procedure. They see the Cleveland Clinic is on board. A switch goes off and they make an instant decision that everything is fine. You see this over and over: Really smart and wealthy people start to believe completely implausible things with 100 percent certainty.”
  • Ms. Holmes’s story also fits into a broader narrative underway in medicine, in which new health care entrepreneurs are upending ossified hospital practices with the goal of delivering more effective and patient-oriented care.
  • as a medical technology company, Theranos has bumped up against something else: the scientific method, which puts a premium on verification over narrative.
  • “You have to subject yourself to peer review. You can’t just go in a stealthy mode and then announce one day that you’ve got technology that’s going to disrupt the world.”
  • Professor Yeo said that he and his colleagues wanted to see data and testing in independent labs. “We have a small army of people ready and willing to test Theranos’s products if they’d ask us,” he said. “And that can be done without revealing any trade secrets.”
  • “Every other company in this field has gone through peer review,” said Mr. Cherny of Evercore. “Why hold back so much of the platform if your goal is the greater good of humanity?”

The Joy of Psyching Myself Out­ - The New York Times - 0 views

  • IS it possible to think scientifically and creatively at once? Can you be both a psychologist and a writer?
  • “A writer must be as objective as a chemist,” Anton Chekhov wrote in 1887. “He must abandon the subjective line; he must know that dung heaps play a very reasonable part in a landscape.” Chekhov’s chemist is a naturalist — someone who sees reality for what it is, rather than what it should be. In that sense, the starting point of the psychologist and the writer is the same: a curiosity that leads you to observe life in all its dimensions.
  • Without verification, we can’t always trust what we see — or rather, what we think we see. Whether we’re psychologists or writers (or anything else), our eyes are never the impartial eyes of Chekhov’s chemist. Our expectations, our wants and shoulds, get in the way. Take, once again, lying. Why do we think we know how liars behave? Liars should divert their eyes. They should feel ashamed and guilty and show the signs of discomfort that such feelings engender. And because they should, we think they do.
  • The desire for the world to be what it ought to be and not what it is permeates experimental psychology as much as writing, though. There’s experimental bias and the problem known in the field as “demand characteristics” — when researchers end up finding what they want to find by cuing participants to act a certain way. It’s also visible when psychologists choose to study one thing rather than another, dismiss evidence that doesn’t mesh with their worldview while embracing that which does. The subjectivity we tend to associate with the writerly way of looking may simply be more visible in that realm rather than exclusive to it.
  • “There is no other source of knowledge of the universe but the intellectual manipulation of carefully verified observations,” he said.
  • Intuition and inspiration, he went on, “can safely be counted as illusions, as fulfillments of wishes.” They are not to be relied on as evidence of any sort. “Science takes account of the fact that the mind of man creates such demands and is ready to trace their source, but it has not the slightest ground for thinking them justified.”
  • That is what both the psychologist and the writer should strive for: a self-knowledge that allows you to look in order to discover, without agenda, without preconception, without knowing or caring if what you’re seeing is wrong or right in your scheme of the world. It’s harder than it sounds. For one thing, you have to possess the self-knowledge that will allow you to admit when you’re wrong.
  • most new inquiries never happened — in a sense, it meant that objectivity was more an ideal than a reality. Each study was selected for a reason other than intrinsic interest.
  • Isolation precludes objectivity. It’s in the merging not simply of ways of seeing but also of modes of thought that a truly whole perception of reality may eventually emerge. Or at least that way we can realize its ultimate impossibility — and that’s not nothing, either.

The Joy of Psyching Myself Out­ - The New York Times - 0 views

  • that neat separation is not just unwarranted; it’s destructive
  • Although it’s often presented as a dichotomy (the apparent subjectivity of the writer versus the seeming objectivity of the psychologist), it need not be.
  • At the turn of the century, psychology was a field quite unlike what it is now. The theoretical musings of William James were the norm (a wry commenter once noted that William James was the writer, and his brother Henry, the psychologist)
  • Freud was a breed of psychologist that hardly exists anymore: someone who saw the world as both writer and psychologist, and for whom there was no conflict between the two. That boundary melding allowed him to posit the existence of cognitive mechanisms that wouldn’t be empirically proved for decades,
  • Freud got it brilliantly right and brilliantly wrong. The rightness is as good a justification as any of the benefits, the necessity even, of knowing how to look through the eyes of a writer. The wrongness is part of the reason that the distinction between writing and experimental psychology has grown far more rigid than it was a century ago.
  • the signs people associate with liars often have little empirical evidence to support them. Therein lies the psychologist’s distinct role and her necessity. As a writer, you look in order to describe, but you remain free to use that description however you see fit. As a psychologist, you look to describe, yes, but also to verify.
  • IN 1932, when he was in his 70s, Freud gave a series of lectures on psychoanalysis. In his final talk, “A Philosophy of Life,” he focused on clarifying an important caveat to his research: His followers should not be confused by the seemingly internal, and thus possibly subjective, nature of his work. “There is no other source of knowledge of the universe but the intellectual manipulation of carefully verified observations,” he said.
  • Even with the best intentions, objectivity can prove a difficult companion. I left psychology behind because I found its structural demands overly hampering. I couldn’t just pursue interesting lines of inquiry; I had to devise a set of experiments, see how feasible they were, both technically and financially, consider how they would reflect on my career. That meant that most new inquiries never happened — in a sense, it meant that objectivity was more an ideal than a reality. Each study was selected for a reason other than intrinsic interest.

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve. [A minimal state-machine sketch of this idea appears after these annotations.]
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. [A toy illustration of this kind of exhaustive checking appears after these annotations.]
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
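
Several of the annotations above describe model-based design: rather than writing code directly, you describe the system as states and allowed transitions, and tooling generates or checks code from that model. The sketch below is only a toy Python analogy of the idea (industrial tools use graphical models and certified code generators); the elevator states and events are illustrative, echoing the example in the annotations.

```python
# The "model" here is just a transition table; everything else is derived from it.
# States and events are illustrative only.
TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "start"): "moving",
    ("moving", "stop"): "door_closed",
}

def step(state: str, event: str) -> str:
    """Apply one event; (state, event) pairs not in the model are rejected (no change)."""
    return TRANSITIONS.get((state, event), state)

# Because the rules are data, properties can be checked by inspecting the table rather
# than by reading generated code: the only way into "moving" is from "door_closed",
# so the elevator can never start moving while the door is open.
assert all(src == "door_closed" for (src, _event), dst in TRANSITIONS.items() if dst == "moving")

print(step("door_open", "start"))   # stays "door_open": the model has no such transition
```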
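The annotations on TLA+ and on “rare” combinations of events point at the same practice: write the system’s logic down abstractly, then let a tool check every reachable state instead of sampling a few with tests. (At a million requests per second, a one-in-a-billion coincidence can be expected roughly every 1,000 seconds, about 17 minutes.) TLA+ has its own specification language and model checker; the toy Python stand-in below simply enumerates every interleaving of two non-atomic read-modify-write clients and checks an invariant, to give the flavor of exhaustive checking rather than the real notation.

```python
from itertools import permutations

STEPS = ["read", "write"]            # each client: read the counter, then write read_value + 1

def run(schedule):
    """Execute one interleaving of the two clients' steps and return the final counter."""
    counter = 0
    local = {0: None, 1: None}       # value each client last read
    progress = {0: 0, 1: 0}          # which step each client performs next
    for client in schedule:
        if STEPS[progress[client]] == "read":
            local[client] = counter
        else:
            counter = local[client] + 1
        progress[client] += 1
    return counter

# Every schedule contains each client exactly twice; the invariant is "final counter == 2".
violations = sorted({s for s in permutations([0, 0, 1, 1]) if run(s) != 2})
print(violations)   # schedules where both clients read before either writes, e.g. (0, 1, 0, 1)
```

Even this tiny model surfaces the classic lost-update bug exhaustively rather than probabilistically: the checker does not merely report that a bad interleaving is possible, it lists every one.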

The View from Nowhere: Questions and Answers » Pressthink - 2 views

  • In pro journalism, American style, the View from Nowhere is a bid for trust that advertises the viewlessness of the news producer. Frequently it places the journalist between polarized extremes, and calls that neither-nor position “impartial.” Second, it’s a means of defense against a style of criticism that is fully anticipated: charges of bias originating in partisan politics and the two-party system. Third: it’s an attempt to secure a kind of universal legitimacy that is implicitly denied to those who stake out positions or betray a point of view. American journalists have almost a lust for the View from Nowhere because they think it has more authority than any other possible stance.
  • Who gets credit for the phrase, “view from nowhere?” A. The philosopher Thomas Nagel, who wrote a very important book with that title.
  • Q. What does it say? A. It says that human beings are, in fact, capable of stepping back from their position to gain an enlarged understanding, which includes the more limited view they had before the step back. Think of the cinema: when the camera pulls back to reveal where a character had been standing and shows us a fuller tableau. To Nagel, objectivity is that kind of motion. We try to “transcend our particular viewpoint and develop an expanded consciousness that takes in the world more fully.”
  • But there are limits to this motion. We can’t transcend all our starting points. No matter how far it pulls back the camera is still occupying a position. We can’t actually take the “view from nowhere,” but this doesn’t mean that objectivity is a lie or an illusion. Our ability to step back and the fact that there are limits to it– both are real. And realism demands that we acknowledge both.
  • Q. So is objectivity a myth… or not? A. One of the many interesting things Nagel says in that book is that “objectivity is both underrated and overrated, sometimes by the same persons.” It’s underrated by those who scoff at it as a myth. It is overrated by people who think it can replace the view from somewhere or transcend the human subject. It can’t.
  • When MSNBC suspends Keith Olbermann for donating without company permission to candidates he supports– that’s dumb. When NPR forbids its “news analysts” from expressing a view on matters they are empowered to analyze– that’s dumb. When reporters have to “launder” their views by putting them in the mouths of think tank experts: dumb. When editors at the Washington Post decline even to investigate whether the size of rallies on the Mall can be reliably estimated because they want to avoid charges of “leaning one way or the other,” as one of them recently put it, that is dumb. When CNN thinks that, because it’s not MSNBC and it’s not Fox, it’s the only “real news network” on cable, CNN is being dumb about itself.
  • Let some in the press continue on with the mask of impartiality, which has advantages for cultivating sources and soothing advertisers. Let others experiment with transparency as the basis for trust. When you click on their by-line it takes you to a disclosure page where there is a bio, a kind of mission statement, and a creative attempt to say: here’s where I’m coming from (one example) along with campaign contributions, any affiliations or memberships, and–I’m just speculating now–a list of heroes and villains, or major influences, along with an archive of the work, plus anything else that might assist the user in placing this person on the user’s mattering map.
  • it has unearned authority in the American press. If in doing the serious work of journalism–digging, reporting, verification, mastering a beat–you develop a view, expressing that view does not diminish your authority. It may even add to it. The View from Nowhere doesn’t know from this. It also encourages journalists to develop bad habits. Like: criticism from both sides is a sign that you’re doing something right, when you could be doing everything wrong.
  • If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad
  • I think we are in the midst of a shift in the system by which trust is sustained in professional journalism. David Weinberger tried to capture it with his phrase: transparency is the new objectivity. My version of that: it’s easier to trust in “here’s where I’m coming from” than the View from Nowhere. These are two different ways of bidding for the confidence of the users.
  • In the newer way, the logic is different. “Look, I’m not going to pretend that I have no view. Instead, I am going to level with you about where I’m coming from on this. So factor that in when you evaluate my report. Because I’ve done the work and this is what I’ve concluded…”
  • if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense.

A News Organization That Rejects the View From Nowhere - Conor Friedersdorf - The Atlantic - 1 views

  • For many years, Rosen has been a leading critic of what he calls The View From Nowhere, or the conceit that journalists bring no prior commitments to their work. On his long-running blog, PressThink, he's advocated for "The View From Somewhere"—an effort by journalists to be transparent about their priors, whether ideological or otherwise.  Rosen is just one of several voices who'll shape NewCo. Still, the new venture may well be a practical test of his View from Somewhere theory of journalism. I chatted with Rosen about some questions he'll face. 
  • The View from Nowhere won’t be a requirement for our journalists. Nor will a single ideology prevail. NewCo itself will have a view of the world: Accountability journalism, exposing abuses of power, revealing injustices will no doubt be part of it. Under that banner many “views from somewhere” can fit.
  • The way "objectivity" evolves historically is out of something much more defensible and interesting, which is in that phrase "Of No Party or Clique." That's the founders of The Atlantic saying they want to be independent of party politics. They don't claim to have no politics, do they? They simply say: We're not the voice of an existing faction or coalition. But they're also not the Voice of God.
  • NewCo will emulate the founders of The Atlantic. At some point "independent from" turned into "objective about." That was the wrong turn, made long ago, by professional journalism, American-style.
  • You've written that The View From Nowhere is, in part, a defense mechanism against charges of bias originating in partisan politics. If you won't be invoking it, what will your defense be when those charges happen? There are two answers to that. 1) We told you where we're coming from. 2) High standards of verification. You need both.
  • What about ideological diversity? The View from Somewhere obviously permits it. You've said you'll have it. Is that because it is valuable in itself?
  • The basic insight is correct: Since "news judgment" is judgment, the product is improved when there are multiple perspectives at the table ... But, if the people who are recruited to the newsroom because they add perspectives that might otherwise be overlooked are also taught that they should leave their politics at the door, or think like professional journalists rather than representatives of their community, or privilege something called "news values" over the priorities they had when they decided to become journalists, then these people are being given a fatally mixed message, if you see what I mean. They are valued for the perspective they bring, and then told that they should transcend that perspective.
  • When people talk about objectivity in journalism they have many different things in mind. Some of these I have no quarrel with. You could even say I’m a “fan.” For example, if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense. Don’t you? If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad. Is that objectivity? If so, I’m all for it, and I do that myself sometimes. 
  • By "we can do better than that" I mean: We can insist on the struggle to tell it like it is without also insisting on the View from Nowhere. The two are not connected. It was a mistake to think that they necessarily are. But why was this mistake made? To control people in the newsroom from "above." That's a big part of objectivity. Not truth. Control.
  • If it works out as you hope, if things are implemented well, etc., what's the potential payoff for readers? I think it's three things: First, this is a news site that is born into the digital world, but doesn't have to return profits to investors. That's not totally unique
  • Second: It's going to be a technology company as much as a news organization. That should result in better service.
  • a good formula for innovation is to start with something people want to do and eliminate some of the steps required to do it
  • The third upside is news with a human voice restored to it. This is the great lesson that blogging gives to journalism

Why we believe fake news - BBC Future - 0 views

  • Less commonplace is the acknowledgement that human judgements also rely upon secondary information that doesn’t come from any external source – and that offers one of the most powerful tools we possess for dealing with the deluge itself. This source is social information. Or, in other words: what we think other people are thinking.
  • Your senses inform you that other people are moving frantically. But it’s the social interpretation you put on this information that tells you what you most need to know: these people believe that something bad is happening, and this means you should probably be trying to escape too.
  • Assuming they have no first-hand knowledge of the claim, it’s theoretically possible for them to look it up elsewhere – a process of laborious verification that involves trawling through countless claims and counter-claims. They also, however, possess a far simpler method of evaluation, which is to ask what other people seem to think.
  • As Hendricks and Hansen put it, “when you don’t possess sufficient information to solve a given problem, or if you just don’t want to or have the time for processing it, then it can be rational to imitate others by way of social proof”
  • When we either know very little about something, or the information surrounding it is overwhelming, it makes excellent sense to look to others’ apparent beliefs as an indication of what is going on. In fact, this is often the most reasonable response, so long as we have good reason to believe that others have access to accurate information; and that what they seem to think and what they actually believe are the same.
  • Networks where members are, for example, randomly exposed to a range of views are less likely to experience cascades of unchallenged belief. [A toy simulation of such a cascade appears after these annotations.]
  • And the more we understand the chain of events that led someone towards a particular perspective, the more we understand what it might mean to arrive at other views – or, equally importantly, to sow the seeds of sceptical engagement.
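
The social-proof mechanism described above can be illustrated with a toy simulation. All parameters below (signal accuracy, the "lead of two" copying rule, population size) are made-up choices for illustration, not a model from the article: each agent sees every earlier public choice and copies the crowd once it clearly leans one way, so a few early mistakes can lock the whole group into the wrong belief even though most private signals are correct.

```python
import random

random.seed(1)  # reproducible illustration

def run_trial(n_agents: int = 50, signal_accuracy: float = 0.6) -> bool:
    """Return True if the group converges on the wrong answer despite mostly-correct signals."""
    truth = 1
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else 1 - truth
        lead = choices.count(1) - choices.count(0)
        if lead >= 2:          # public choices clearly favour 1: imitate them
            choice = 1
        elif lead <= -2:       # public choices clearly favour 0: imitate them
            choice = 0
        else:                  # otherwise fall back on the private signal
            choice = signal
        choices.append(choice)
    return choices.count(1 - truth) > n_agents // 2

wrong_runs = sum(run_trial() for _ in range(2000))
print(f"{wrong_runs / 2000:.0%} of runs cascade onto the wrong belief")
```

Changing what each agent is allowed to see (for example, a random sample of earlier choices rather than the full running tally) changes how often such cascades take hold, which is the point made above about network structure.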

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on ...
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
1More

Buy Verified PayPal Account - Old/New USA, UK, CA Countries - 0 views

  •  
    Do you want to use a PayPal account for a long time? Buy a verified PayPal account from us. We will give you a PayPal account that is fully verif ...
4More

Twitter launches a crisis misinformation policy - CNN - 0 views

  • Washington (CNN Business) Twitter will now apply warning labels to — and cease recommending — claims that outside experts have identified as misinformation during fast-moving times of crisis, the social media company said Thursday.
  • The platform's new crisis misinformation policy is designed to slow the spread of viral falsehoods during natural disasters, armed conflict and public health emergencies, the company announced.
  • "To determine whether claims are misleading, we require verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more," Twitter's head of safety and integrity, Yoel Roth, wrote in a blog post.
  • ...1 more annotation...
  • It also comes amid an ongoing, global battle over the future of platform moderation, with officials in Europe seeking to heighten standards surrounding tech companies' content decision-making and lawmakers in many US states seeking to force platforms to moderate less.
21More

Musk, SBF, and the Myth of Smug, Castle-Building Nerds - 0 views

  • Experts in content moderation suggested that Musk’s actual policies lacked any coherence and, if implemented, would have all kinds of unintended consequences. That has happened with verification. Almost every decision he makes is an unforced error made with extreme confidence in front of a growing audience of people who already know he has messed up, and is supported by a network of sycophants and blind followers who refuse to see or tell him that he’s messing up. The dynamic is … very Trumpy!
  • As with the former president, it can be hard at times for people to believe or accept that our systems are so broken that a guy who is clearly this inept can also be put in charge of something so important. A common pundit claim before Donald Trump got into the White House was that the gravity of the job and prestige of the office might humble or chasten him.
  • The same seems true for Musk. Even people skeptical of Musk’s behavior pointed to his past companies as predictors of future success. He’s rich. He does smart-people stuff. The rockets land pointy-side up!
  • ...18 more annotations...
  • Time and again, we learned there was never a grand plan or big ideas—just weapons-grade ego, incompetence, thin skin, and prejudice against those who don’t revere him.
  • Despite all the incredible, damning reporting coming out of Twitter and all of Musk’s very public mistakes, many people still refuse to believe—even if they detest him—that he is simply incompetent.
  • What is amazing about the current moment is that, despite how ridiculous it all feels, a fundamental tenet of reality and logic appears to be holding true: If you don’t know what you’re doing or don’t really care, you’ll run the thing you’re in charge of into the ground, and people will notice.
  • And so the moment feels too dumb and too on the nose to be real and yet also very real—kind of like all of reality in 2022.
  • I don’t really know where any of this will lead, but one interesting possibility is that Musk gets increasingly reactionary and trollish in his politics and stewardship of Twitter.
  • Leaving the politics aside, from a basic customer-service standpoint this is generally an ill-advised way for the owner of a company to treat an elected official when that elected official wishes to know why your service has failed them. The reason it is ill-advised is because then the elected official could tweet something like what Senator Markey tweeted on Sunday: “One of your companies is under an FTC consent decree. Auto safety watchdog NHTSA is investigating another for killing people. And you’re spending your time picking fights online. Fix your companies. Or Congress will.”
  • It seems clear that Musk, like any dedicated social-media poster, thrives on validation, so it makes sense that, as he continues to dismantle his own mystique as an innovator, he might look for adoration elsewhere
  • Recent history has shown that, for a specific audience, owning the libs frees a person from having to care about competency or outcome of their actions. Just anger the right people and you’re good, even if you’re terrible at your job. This won’t help Twitter’s financial situation, which seems bleak, but it’s … something!
  • Bankman-Fried, the archetype, appealed to people for all kinds of reasons. His narrative as a philanthropist, and a smart rationalist, and a stone-cold weirdo was something people wanted to buy into because, generally, people love weirdos who don’t conform to systems and then find clever ways to work around them and become wildly successful as a result.
  • Bankman-Fried was a way that a lot of people could access and maybe obliquely understand what was going on in crypto. They may not have understood what FTX did, but they could grasp a nerd trying to leverage a system in order to do good in the world and advance progressive politics. In that sense, Bankman-Fried is easy to root for and exciting to cover. His origin story and narrative become more important than the particulars of what he may or may not be doing.
  • the past few weeks have been yet another reminder that the smug-nerd-genius narrative may sell magazines, and it certainly raises venture funding, but the visionary founder is, first and foremost, a marketing product, not a reality. It’s a myth that perpetuates itself. Once branded a visionary, the founder can use the narrative to raise money and generate a formidable net worth, and then the financial success becomes its own résumé. But none of it is real.
  • Adversarial journalism ideally questions and probes power. If it is trained on technology companies and their founders, it is because they either wield that power or have the potential to do so. It is, perhaps unintuitively, a form of respect for their influence and potential to disrupt. But that’s not what these founders want.
  • even if all tech coverage had been totally flawless, Silicon Valley would have rejected adversarial tech journalism because most of its players do not actually want the responsibility that comes with their potential power. They want only to embody the myth and reap the benefits. They want the narrative, which is focused on origins, ambitions, ethos, and marketing, and less on the externalities and outcomes.
  • Looking at Musk and Bankman-Fried, it would appear that the tech visionaries mostly get their way. For all the complaints of awful, negative coverage and biased reporting, people still want to cheer for and give money to the “‘smug nerds building castles in the sky.’” Though they vary wildly right now in magnitude, their wounds are self-inflicted—and, perhaps, the result of believing their own hype.
  • That’s because, almost always, the smug-nerd-genius narrative is a trap. It’s one that people fall into because they need to believe that somebody out there is so brilliant, they can see the future, or that they have some greater, more holistic understanding of the world (or that such an understanding is possible)
  • It’s not unlike a conspiracy theory in that way. The smug-nerd-genius narrative helps take the complexity of the world and make it more manageable.
  • Putting your faith in a space billionaire or a crypto wunderkind isn’t just sad fanboydom; it is also a way for people to outsource their brain to somebody else who, they believe, can see what they can’t
  • the smug nerd genius is exceedingly rare, and, even when they’re not outed as a fraud or a dilettante, they can be assholes or flawed like anyone else. There aren’t shortcuts for making sense of the world, and anyone who is selling themselves that way or buying into that narrative about them should read to us as a giant red flag.
121More

Why the Past 10 Years of American Life Have Been Uniquely Stupid - The Atlantic - 0 views

  • Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories.
  • Social media has weakened all three.
  • gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.
  • ...118 more annotations...
  • the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.
  • Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom
  • That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers.
  • “Like” and “Share” buttons quickly became standard features of most other platforms.
  • Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well.
  • Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared. [A toy sketch contrasting a chronological timeline with an engagement-ranked feed appears after this list.]
  • By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous”
  • If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.
  • This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment,
  • As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.
  • It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution.
  • The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles’ heel because it depended on the collective judgment of the people, and democratic communities are subject to “the turbulency and weakness of unruly passions.”
  • The key to designing a sustainable republic, therefore, was to build in mechanisms to slow things down, cool passions, require compromise, and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically, on Election Day.
  • The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madison’s nightmare.
  • a less quoted yet equally important insight, about democracy’s vulnerability to triviality.
  • Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”
  • Social media has both magnified and weaponized the frivolous.
  • It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust.
  • a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.
  • when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side
  • The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia).
  • The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.
  • When people lose trust in institutions, they lose trust in the stories told by those institutions. That’s particularly true of the institutions entrusted with the education of children.
  • Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their children’s history lessons––and math lessons and literature selections, and any new pedagogical shifts anywhere in the country
  • The motives of teachers and administrators come into question, and overreaching laws or curricular reforms sometimes follow, dumbing down education and reducing trust in it further.
  • young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people, and less likely to share any such story with those who attended different schools or who were educated in a different decade.
  • former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached.
  • he notes a constructive feature of the pre-digital era: a single “mass audience,” all consuming the same content, as if they were all looking into the same gigantic mirror at the reflection of their own society.
  • The digital revolution has shattered that mirror, and now the public inhabits those broken pieces of glass. So the public isn’t one thing; it’s highly fragmented, and it’s basically mutually hostile
  • Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.
  • I think we can date the fall of the tower to the years between 2011 (Gurri’s focal year of “nihilistic” protests) and 2015, a year marked by the “great awokening” on the left and the ascendancy of Donald Trump on the right.
  • Twitter can overpower all the newspapers in the country, and stories cannot be shared (or at least trusted) across more than a few adjacent fragments—so truth cannot achieve widespread adherence.
  • After Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.
  • Politics After Babel
  • “Politics is the art of the possible,” the German statesman Otto von Bismarck said in 1867. In a post-Babel democracy, not much may be possible.
  • The ideological distance between the two parties began increasing faster in the 1990s. Fox News and the 1994 “Republican Revolution” converted the GOP into a more combative party.
  • So cross-party relationships were already strained before 2009. But the enhanced virality of social media thereafter made it more hazardous to be seen fraternizing with the enemy or even failing to attack the enemy with sufficient vigor.
  • What changed in the 2010s? Let’s revisit that Twitter engineer’s metaphor of handing a loaded gun to a 4-year-old. A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet
  • from 2009 to 2012, Facebook and Twitter passed out roughly 1 billion dart guns globally. We’ve been shooting one another ever since.
  • The group furthest to the right, the “devoted conservatives,” comprised 6 percent of the U.S. population.
  • the warped “accountability” of social media has also brought injustice—and political dysfunction—in three ways.
  • First, the dart guns of social media give more power to trolls and provocateurs while silencing good citizens.
  • a small subset of people on social-media platforms are highly concerned with gaining status and are willing to use aggression to do so.
  • Across eight studies, Bor and Petersen found that being online did not make most people more aggressive or hostile; rather, it allowed a small number of aggressive people to attack a much larger set of victims. Even a small number of jerks were able to dominate discussion forums,
  • Additional research finds that women and Black people are harassed disproportionately, so the digital public square is less welcoming to their voices.
  • Second, the dart guns of social media give more power and voice to the political extremes while reducing the power and voice of the moderate majority.
  • The “Hidden Tribes” study, by the pro-democracy group More in Common, surveyed 8,000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors.
  • Social media has given voice to some people who had little previously, and it has made it easier to hold powerful people accountable for their misdeeds
  • The group furthest to the left, the “progressive activists,” comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed, at 56 percent.
  • These two extreme groups are similar in surprising ways. They are the whitest and richest of the seven groups, which suggests that America is being torn apart by a battle between two subsets of the elite who are not representative of the broader society.
  • they are the two groups that show the greatest homogeneity in their moral and political attitudes.
  • likely a result of thought-policing on social media:
  • political extremists don’t just shoot darts at their enemies; they spend a lot of their ammunition targeting dissenters or nuanced thinkers on their own team.
  • Finally, by giving everyone a dart gun, social media deputizes everyone to administer justice with no due process. Platforms like Twitter devolve into the Wild West, with no accountability for vigilantes.
  • Enhanced-virality platforms thereby facilitate massive collective punishment for small or imagined offenses, with real-world consequences, including innocent people losing their jobs and being shamed into suicide
  • we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.
  • Since the tower fell, debates of all kinds have grown more and more confused. The most pervasive obstacle to good thinking is confirmation bias, which refers to the human tendency to search only for evidence that confirms our preferred beliefs
  • search engines were supercharging confirmation bias, making it far easier for people to find evidence for absurd beliefs and conspiracy theories
  • The most reliable cure for confirmation bias is interaction with people who don’t share your beliefs. They confront you with counterevidence and counterargument.
  • In his book The Constitution of Knowledge, Jonathan Rauch describes the historical breakthrough in which Western societies developed an “epistemic operating system”—that is, a set of institutions for generating knowledge from the interactions of biased and cognitively flawed individuals
  • English law developed the adversarial system so that biased advocates could present both sides of a case to an impartial jury.
  • Newspapers full of lies evolved into professional journalistic enterprises, with norms that required seeking out multiple sides of a story, followed by editorial review, followed by fact-checking.
  • Universities evolved from cloistered medieval institutions into research powerhouses, creating a structure in which scholars put forth evidence-backed claims with the knowledge that other scholars around the world would be motivated to gain prestige by finding contrary evidence.
  • Part of America’s greatness in the 20th century came from having developed the most capable, vibrant, and productive network of knowledge-producing institutions in all of human history
  • But this arrangement, Rauch notes, “is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings, and those need to be understood, affirmed, and protected.”
  • This, I believe, is what happened to many of America’s key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted
  • it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight
  • Participants in our key institutions began self-censoring to an unhealthy degree, holding back critiques of policies and ideas—even those presented in class by their students—that they believed to be ill-supported or wrong.
  • The stupefying process plays out differently on the right and the left because their activist wings subscribe to different narratives with different sacred values.
  • The “Hidden Tribes” study tells us that the “devoted conservatives” score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors.
  • they are psychologically different from the larger group of “traditional conservatives” (19 percent of the population), who emphasize order, decorum, and slow rather than radical change.
  • The traditional punishment for treason is death, hence the battle cry on January 6: “Hang Mike Pence.”
  • Right-wing death threats, many delivered by anonymous accounts, are proving effective in cowing traditional conservatives
  • The wave of threats delivered to dissenting Republican members of Congress has similarly pushed many of the remaining moderates to quit or go silent, giving us a party ever more divorced from the conservative tradition, constitutional responsibility, and reality.
  • The stupidity on the right is most visible in the many conspiracy theories spreading across right-wing media and now into Congress.
  • The Democrats have also been hit hard by structural stupidity, though in a different way. In the Democratic Party, the struggle between the progressive wing and the more moderate factions is open and ongoing, and often the moderates win.
  • The problem is that the left controls the commanding heights of the culture: universities, news organizations, Hollywood, art museums, advertising, much of Silicon Valley, and the teachers’ unions and teaching colleges that shape K–12 education. And in many of those institutions, dissent has been stifled:
  • Liberals in the late 20th century shared a belief that the sociologist Christian Smith called the “liberal progress” narrative, in which America used to be horrifically unjust and repressive, but, thanks to the struggles of activists and heroes, has made (and continues to make) progress toward realizing the noble promise of its founding.
  • It is also the view of the “traditional liberals” in the “Hidden Tribes” study (11 percent of the population), who have strong humanitarian values, are older than average, and are largely the people leading America’s cultural and intellectual institutions.
  • when the newly viralized social-media platforms gave everyone a dart gun, it was younger progressive activists who did the most shooting, and they aimed a disproportionate number of their darts at these older liberal leaders.
  • Confused and fearful, the leaders rarely challenged the activists or their nonliberal narrative in which life at every institution is an eternal battle among identity groups over a zero-sum pie, and the people on top got there by oppressing the people on the bottom. This new narrative is rigidly egalitarian––focused on equality of outcomes, not of rights or opportunities. It is unconcerned with individual rights.
  • The universal charge against people who disagree with this narrative is not “traitor”; it is “racist,” “transphobe,” “Karen,” or some related scarlet letter marking the perpetrator as one who hates or harms a marginalized group.
  • The punishment that feels right for such crimes is not execution; it is public shaming and social death.
  • anyone on Twitter had already seen dozens of examples teaching the basic lesson: Don’t question your own side’s beliefs, policies, or actions. And when traditional liberals go silent, as so many did in the summer of 2020, the progressive activists’ more radical narrative takes over as the governing narrative of an organization.
  • This is why so many epistemic institutions seemed to “go woke” in rapid succession that year and the next, beginning with a wave of controversies and resignations at The New York Times and other newspapers, and continuing on to social-justice pronouncements by groups of doctors and medical associations
  • The problem is structural. Thanks to enhanced-virality social media, dissent is punished within many of our institutions, which means that bad ideas get elevated into official policy.
  • In a 2018 interview, Steve Bannon, the former adviser to Donald Trump, said that the way to deal with the media is “to flood the zone with shit.” He was describing the “firehose of falsehood” tactic pioneered by Russian disinformation programs to keep Americans confused, disoriented, and angry.
  • artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence.
  • Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)
  • American factions won’t be the only ones using AI and social media to generate attack content; our adversaries will too.
  • In the 20th century, America’s shared identity as the country leading the fight to make the world safe for democracy was a strong force that helped keep the culture and the polity together.
  • In the 21st century, America’s tech companies have rewired the world and created products that now appear to be corrosive to democracy, obstacles to shared understanding, and destroyers of the modern tower.
  • What changes are needed?
  • I can suggest three categories of reforms––three goals that must be achieved if democracy is to remain viable in the post-Babel era.
  • We must harden democratic institutions so that they can withstand chronic anger and mistrust, reform social media so that it becomes less socially corrosive, and better prepare the next generation for democratic citizenship in this new age.
  • Harden Democratic Institutions
  • we must reform key institutions so that they can continue to function even if levels of anger, misinformation, and violence increase far above those we have today.
  • Reforms should reduce the outsize influence of angry extremists and make legislators more responsive to the average voter in their district.
  • One example of such a reform is to end closed party primaries, replacing them with a single, nonpartisan, open primary from which the top several candidates advance to a general election that also uses ranked-choice voting
  • A second way to harden democratic institutions is to reduce the power of either political party to game the system in its favor, for example by drawing its preferred electoral districts or selecting the officials who will supervise elections
  • These jobs should all be done in a nonpartisan way.
  • Reform Social Media
  • Social media’s empowerment of the far left, the far right, domestic trolls, and foreign agents is creating a system that looks less like democracy and more like rule by the most aggressive.
  • it is within our power to reduce social media’s ability to dissolve trust and foment structural stupidity. Reforms should limit the platforms’ amplification of the aggressive fringes while giving more voice to what More in Common calls “the exhausted majority.”
  • the main problem with social media is not that some people post fake or toxic stuff; it’s that fake and outrage-inducing content can now attain a level of reach and influence that was not possible before
  • Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers.
  • One of the first orders of business should be compelling the platforms to share their data and their algorithms with academic researchers.
  • Prepare the Next Generation
  • Childhood has become more tightly circumscribed in recent generations––with less opportunity for free, unstructured play; less unsupervised time outside; more time online. Whatever else the effects of these shifts, they have likely impeded the development of abilities needed for effective self-governance for many young adults
  • Depression makes people less likely to want to engage with new people, ideas, and experiences. Anxiety makes new things seem more threatening. As these conditions have risen and as the lessons on nuanced social behavior learned through free play have been delayed, tolerance for diverse viewpoints and the ability to work out disputes have diminished among many young people
  • Students did not just say that they disagreed with visiting speakers; some said that those lectures would be dangerous, emotionally devastating, a form of violence. Because rates of teen depression and anxiety have continued to rise into the 2020s, we should expect these views to continue in the generations to follow, and indeed to become more severe.
  • The most important change we can make to reduce the damaging effects of social media on children is to delay entry until they have passed through puberty.
  • The age should be raised to at least 16, and companies should be held responsible for enforcing it.
  • Let them out to play. Stop starving children of the experiences they most need to become good citizens: free play in mixed-age groups of children with minimal adult supervision
  • while social media has eroded the art of association throughout society, it may be leaving its deepest and most enduring marks on adolescents. A surge in rates of anxiety, depression, and self-harm among American teens began suddenly in the early 2010s. (The same thing happened to Canadian and British teens, at the same time.) The cause is not known, but the timing points to social media as a substantial contributor—the surge began just as the large majority of American teens became daily users of the major platforms.
  • What would it be like to live in Babel in the days after its destruction? We know. It is a time of confusion and loss. But it is also a time to reflect, listen, and build.
  • In recent years, Americans have started hundreds of groups and organizations dedicated to building trust and friendship across the political divide, including BridgeUSA, Braver Angels (on whose board I serve), and many others listed at BridgeAlliance.us. We cannot expect Congress and the tech companies to save us. We must change ourselves and our communities.
  • when we look away from our dysfunctional federal government, disconnect from social media, and talk with our neighbors directly, things seem more hopeful. Most Americans in the More in Common report are members of the “exhausted majority,” which is tired of the fighting and is willing to listen to the other side and compromise. Most Americans now see that social media is having a negative impact on the country, and are becoming more aware of its damaging effects on children.
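
As a deliberately crude illustration of the feed-ranking shift described earlier in this list (see the bracketed note above), the sketch below contrasts a chronological timeline with a feed re-ranked by a hand-tuned engagement score. Nothing in it comes from Facebook or from Haidt's article: the post fields, the weights, and the sample numbers are invented assumptions standing in for the learned engagement-prediction models real platforms use. The only point is the contrast: sorting by recency surfaces whatever friends posted last, while sorting by a score that rewards likes and, especially, shares pushes the most shareable, outrage-inducing post to the top.

```python
# Illustrative only: a chronological timeline versus an engagement-ranked feed.
# The Post fields, weights, and numbers are invented assumptions; real platforms
# use learned models to predict engagement rather than fixed weights.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    text: str
    created: datetime
    likes: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical hand-tuned weights: shares (the engine of virality) count most.
    return 1.0 * post.likes + 5.0 * post.shares

def chronological_feed(posts):
    return sorted(posts, key=lambda p: p.created, reverse=True)

def engagement_ranked_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

now = datetime(2013, 6, 1, 12, 0)
posts = [
    Post("friend_a", "Photos from my hike", now - timedelta(hours=1), likes=12, shares=0),
    Post("friend_b", "Angry post about the out-group", now - timedelta(hours=9), likes=40, shares=300),
    Post("friend_c", "My kid's first day of school", now - timedelta(hours=3), likes=25, shares=1),
]

print([p.author for p in chronological_feed(posts)])      # newest first: friend_a, friend_c, friend_b
print([p.author for p in engagement_ranked_feed(posts)])  # outrage rises: friend_b comes first
```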
13More

Elon Musk May Kill Us Even If Donald Trump Doesn't - 0 views

  • In his extraordinary 2021 book, The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch, a scholar at Brookings, writes that modern societies have developed an implicit “epistemic” compact–an agreement about how we determine truth–that rests on a broad public acceptance of science and reason, and a respect and forbearance towards institutions charged with advancing knowledge.
  • Today, Rauch writes, those institutions have given way to digital “platforms” that traffic in “information” rather than knowledge and disseminate that information not according to its accuracy but its popularity. And what is popular is sensation, shock, outrage. The old elite consensus has given way to an algorithm. Donald Trump, an entrepreneur of outrage, capitalized on the new technology to lead what Rauch calls “an epistemic secession.”
  • Rauch foresees the arrival of “Internet 3.0,” in which the big companies accept that content regulation is in their interest and erect suitable “guardrails.” In conversation with me, Rauch said that social media companies now recognize that their algorithms are “toxic,” and spoke hopefully of alternative models like Mastodon, which eschews algorithms and allows users to curate their own feeds
  • ...10 more annotations...
  • In an Atlantic essay, “Why The Past Ten Years of American Life have Been Uniquely Stupid,” and in a follow-up piece, Haidt argued that the Age of Gutenberg–of books and the depth of understanding that comes with them–ended somewhere around 2014 with the rise of “Share,” “Like” and “Retweet” buttons that opened the way for trolls, hucksters and Trumpists
  • The new age of “hyper-virality,” he writes, has given us both January 6 and cancel culture–ugly polarization in both directions. On the subject of stupidification, we should add the fact that high school students now get virtually their entire stock of knowledge about the world from digital platforms.
  • Haidt proposed several reforms, including modifying Facebook’s “Share” function and requiring “user verification” to get rid of trolls. But he doesn’t really believe in his own medicine
  • Haidt said that the era of “shared understanding” is over–forever. When I asked if he could envision changes that would help protect democracy, Haidt quoted Goldfinger: “Do you expect me to talk?” “No, Mr. Bond, I expect you to die!”
  • Social media is a public health hazard–the cognitive equivalent of tobacco and sugary drinks. Adopting a public health model, we could, for example, ban the use of algorithms to reduce virality, or even require social media platforms to adopt a subscription rather than advertising revenue model and thus remove their incentive to amass ever more eyeballs.
  • We could, but we won’t, because unlike other public health hazards, digital platforms are forms of speech. Fox News is probably responsible for more polarization than all social media put together, but the federal government could not compel it–and all other media firms–to change its revenue model.
  • If Mark Zuckerberg or Elon Musk won’t do so out of concern for the public good–a pretty safe bet–they could be compelled to do so only by public or competitive pressure. 
  • Taiwan has proved resilient because its society is resilient; people reject China’s lies. We, here, don’t lack for fact-checkers, but rather for people willing to believe them. The problem is not the technology, but ourselves.
  • you have to wonder if people really are repelled by our poisonous discourse, or by the hailstorm of disinformation, or if they just want to live comfortably inside their own bubble, and not somebody else’s
  • If Jonathan Haidt is right, it’s not because we’ve created a self-replicating machine that is destined to annihilate reason; it’s because we are the self-replicating machine.