TOK Friends: Group items tagged "verified"

erickjhonkdkk64

Buy Verified PayPal Account - Old/New USA, UK, CA Countries - 0 views

  •  
    Do you want to use a PayPal account for a long time? Buy a verified PayPal account from us. We will give you a PayPal account that is fully verif ...
yelpreviews57

Buy Verified PayPal Account - UK - 0 views

  •  
    No! We provide both personal and business accounts. We use personal documents for personal account verification and business documents for business PayPal account verification. We don't use fake documents for verification; we use original USA/UK/CA citizen documents. That's why all our verified PayPal accounts can be used for at least two years without any problems.
erickjhonkdkk86

The Importance Of Buying A Verified Cashapp Account by Leo Royer on Dribbble - 0 views

  •  
    Are you looking to buy verified CashApp accounts with BTC enabled? We can provide BTC-enabled CashApp accounts at a reasonable price.
Emily Freilich

Scientists Explain their Processes with a little too much honesty - 0 views

  •  
    Satirical view of how scientists work; we think scientists use thorough methods all the time, but can we really know they do? Follows the idea, presented in the article about Kuhn's theories, that scientists do not actually follow the method they lay out in research papers. Additionally, this slideshow asks how much of the evidence scientists present is tested and verified.
erickjhonkdkk102

Buy Verified CashApp Accounts - BTC Enable Aged CashApp by cashapp00 - Issuu - 0 views

  •  
    Are you looking to buy verified CashApp accounts with BTC enabled? We can provide BTC-enabled CashApp accounts at a reasonable price.
erickjhonkdkk73

Buy Verified PayPal Account - Old/New USA, UK, CA Countries by buypaypal1 - Issuu - 0 views

  •  
    Are you looking to buy verified CashApp accounts with BTC enabled? We can provide BTC-enabled CashApp accounts at a reasonable price.
Javier E

The Joy of Psyching Myself Out­ - The New York Times - 0 views

  • that neat separation is not just unwarranted; it’s destructive
  • Although it’s often presented as a dichotomy (the apparent subjectivity of the writer versus the seeming objectivity of the psychologist), it need not be.
  • IS it possible to think scientifically and creatively at once? Can you be both a psychologist and a writer?
  • ...10 more annotations...
  • “A writer must be as objective as a chemist,” Anton Chekhov wrote in 1887. “He must abandon the subjective line; he must know that dung heaps play a very reasonable part in a landscape.”
  • At the turn of the century, psychology was a field quite unlike what it is now. The theoretical musings of William James were the norm (a wry commenter once noted that William James was the writer, and his brother Henry, the psychologist)
  • Freud was a breed of psychologist that hardly exists anymore: someone who saw the world as both writer and psychologist, and for whom there was no conflict between the two. That boundary melding allowed him to posit the existence of cognitive mechanisms that wouldn’t be empirically proved for decades,
  • Freud got it brilliantly right and brilliantly wrong. The rightness is as good a justification as any of the benefits, the necessity even, of knowing how to look through the eyes of a writer. The wrongness is part of the reason that the distinction between writing and experimental psychology has grown far more rigid than it was a century ago.
  • the signs people associate with liars often have little empirical evidence to support them. Therein lies the psychologist’s distinct role and her necessity. As a writer, you look in order to describe, but you remain free to use that description however you see fit. As a psychologist, you look to describe, yes, but also to verify.
  • Without verification, we can’t always trust what we see — or rather, what we think we see.
  • The desire for the world to be what it ought to be and not what it is permeates experimental psychology as much as writing, though. There’s experimental bias and the problem known in the field as “demand characteristics” — when researchers end up finding what they want to find by cuing participants to act a certain way.
  • IN 1932, when he was in his 70s, Freud gave a series of lectures on psychoanalysis. In his final talk, “A Philosophy of Life,” he focused on clarifying an important caveat to his research: His followers should not be confused by the seemingly internal, and thus possibly subjective, nature of his work. “There is no other source of knowledge of the universe but the intellectual manipulation of carefully verified observations,” he said.
  • That is what both the psychologist and the writer should strive for: a self-knowledge that allows you to look in order to discover, without agenda, without preconception, without knowing or caring if what you’re seeing is wrong or right in your scheme of the world. It’s harder than it sounds. For one thing, you have to possess the self-knowledge that will allow you to admit when you’re wrong.
  • Even with the best intentions, objectivity can prove a difficult companion. I left psychology behind because I found its structural demands overly hampering. I couldn’t just pursue interesting lines of inquiry; I had to devise a set of experiments, see how feasible they were, both technically and financially, consider how they would reflect on my career. That meant that most new inquiries never happened — in a sense, it meant that objectivity was more an ideal than a reality. Each study was selected for a reason other than intrinsic interest.
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
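A note on the post-war reliability framework in the first excerpt (make parts reliable, then add redundancy): that framework is essentially probabilistic, which is why it transfers so poorly to software that fails by doing the wrong thing perfectly. Below is a tiny worked example of the redundancy arithmetic, with a made-up per-engine failure probability; nothing here comes from the article's sources.

```python
# Redundancy math for physical parts: two independent engines, each with a
# hypothetical 1-in-100,000 chance of failing on a given flight.
p_engine_failure = 1e-5

p_both_fail = p_engine_failure ** 2        # assumes the two failures are independent
print(f"one engine fails:   {p_engine_failure:.0e}")   # 1e-05
print(f"both engines fail:  {p_both_fail:.0e}")        # 1e-10

# A logic error in software has no such exponent: every copy of the program
# "fails" the same way every time the wrong rule is triggered.
```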
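On the single-bit-flip claim in the Toyota excerpt above: a bit flip is simply one binary digit in memory inverting. The sketch below (Python, with an entirely hypothetical flag word; nothing here is Toyota's code) shows how inverting a single bit can silently change what a program believes about its own state.

```python
# Minimal sketch of a single bit flip: a 1 in memory becomes a 0, or vice versa.
# The flag word and its meaning are invented purely for illustration.

def flip_bit(value: int, bit: int) -> int:
    """Return `value` with one bit inverted."""
    return value ^ (1 << bit)

task_flags = 0b0000_0001               # bit 0 set: "monitoring task enabled" (hypothetical)
corrupted = flip_bit(task_flags, 0)

print(f"before: {task_flags:08b}")     # 00000001
print(f"after:  {corrupted:08b}")      # 00000000 (the flag silently vanishes)
```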
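Victor's "immediate connection" demos and the Khan Academy environment described above are far richer than anything that fits here, but the core loop (notice that the source file changed, re-run it, show the new result at once) can be sketched in a few lines. The file name below is an arbitrary placeholder, and this is only an illustration of the idea, not how Light Table or Playgrounds actually work.

```python
# Toy watch-and-rerun loop: re-execute a script whenever it is saved, so the
# output on screen always reflects the code as it currently reads.
import os
import time
import runpy

SOURCE = "sketch.py"        # hypothetical file the programmer is editing
last_mtime = 0.0

while True:
    mtime = os.path.getmtime(SOURCE)
    if mtime != last_mtime:             # the file was saved since we last ran it
        last_mtime = mtime
        try:
            runpy.run_path(SOURCE)      # run the new version and show its result
        except Exception as exc:        # keep watching even if the edit broke it
            print("error:", exc)
    time.sleep(0.2)
```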
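The elevator passage above describes the model as boxes ("door open," "moving," "door closed") joined by allowed transitions. Written out by hand as a transition table, the rules stay nearly as visible as they are on a whiteboard. This is only a Python sketch of the idea; it is not output from, or a representation of, any actual model-based design tool, and the state and event names are illustrative.

```python
# Hand-written sketch of the elevator rules from the article: the only way to
# get moving is to close the door, and the only way to open the door is to stop.

TRANSITIONS = {
    ("door_open",   "close_door"): "door_closed",
    ("door_closed", "open_door"):  "door_open",
    ("door_closed", "start"):      "moving",
    ("moving",      "stop"):       "door_closed",
}

def step(state: str, event: str) -> str:
    """Apply one event; anything missing from the table is simply not allowed."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event!r} while {state!r}")
    return TRANSITIONS[(state, event)]

state = "door_open"
for event in ("close_door", "start", "stop", "open_door"):
    state = step(state, event)
    print(f"{event:<12} -> {state}")
```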
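Newcombe's point about intuition and "extremely rare" events, quoted above, is easy to make concrete. The probability and request rate below are invented for the illustration, not figures from his paper.

```python
# A one-in-a-billion-per-request event sounds negligible, but at a million
# requests per second it is expected roughly every 17 minutes.
p_per_request = 1e-9          # hypothetical probability of the rare combination
requests_per_second = 1e6     # hypothetical traffic level

seconds_between_events = 1 / (p_per_request * requests_per_second)
print(f"expected about every {seconds_between_events / 60:.1f} minutes")  # ~16.7
```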
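TLA+ has its own mathematical notation and its own model checker (TLC), so the sketch below is not TLA+. It is a brute-force Python illustration of the idea in the excerpts above: write down the states, the allowed transitions, and an invariant, then let the machine enumerate every reachable state instead of testing a handful of scenarios. The toy elevator model and its invariant are assumptions made up for the example.

```python
# Brute-force exploration of every reachable state of a toy elevator model,
# checking the invariant "the elevator never moves while the door is open".
from collections import deque

def next_states(state):
    door, motion = state
    successors = []
    if door == "closed" and motion == "stopped":
        successors.append(("open", motion))       # door may open only once stopped
    if door == "open":
        successors.append(("closed", motion))     # door may always be closed
    if door == "closed" and motion == "stopped":
        successors.append((door, "moving"))       # may start only with the door closed
    if motion == "moving":
        successors.append((door, "stopped"))      # stopping is always allowed
    return successors

def invariant(state):
    door, motion = state
    return not (door == "open" and motion == "moving")

initial = ("open", "stopped")
seen, queue = {initial}, deque([initial])
while queue:                                      # breadth-first search of the state space
    state = queue.popleft()
    assert invariant(state), f"invariant violated in {state}"
    for nxt in next_states(state):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print(f"explored {len(seen)} reachable states; the invariant held in every one")
```

If the guard that keeps the door shut while moving is removed from next_states, the same loop reaches the state ("open", "moving") and the assertion flags it, which is the kind of design-level bug the article says ordinary testing tends to miss.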
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
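On the sampling approach the employees describe above (software samples thousands of tweets per day, and some accounts are reviewed manually): the sketch below shows how a prevalence estimate and a rough confidence interval fall out of such a sample. Every number and label in it is invented for the illustration; none of it is Twitter data or Twitter code.

```python
# Estimating the prevalence of spam from a random sample of items.
import math
import random

random.seed(0)

# Hypothetical population: about 7% spam among one million items.
population = ["spam" if random.random() < 0.07 else "ok" for _ in range(1_000_000)]

sample = random.sample(population, 3_000)          # a few thousand reviewed items
p_hat = sum(item == "spam" for item in sample) / len(sample)

# Normal-approximation 95% confidence interval for the true rate.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"estimated spam rate: {p_hat:.1%} +/- {margin:.1%}")
```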
Javier E

Computer Algorithms Rely Increasingly on Human Helpers - NYTimes.com - 0 views

  • Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.
  • And so, while programming experts still write the step-by-step instructions of computer code, additional people are needed to make more subtle contributions as the work the computers do has become more involved. People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
  • Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing. Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
  • ...3 more annotations...
  • When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages. “There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
  • “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Mr. Huffman said.
  • Ben Taylor, 25, is a product manager at FindTheBest, a fast-growing start-up in Santa Barbara, Calif. The company calls itself a “comparison engine” for finding and comparing more than 100 topics and products, from universities to nursing homes, smartphones to dog breeds. Its Web site went up in 2010, and the company now has 60 full-time employees. Mr. Taylor helps design and edit the site’s education pages. He is not an engineer, but an English major who has become a self-taught expert in the arcane data found in Education Department studies and elsewhere. His research methods include talking to and e-mailing educators. He is an information sleuth.
proudsa

We Asked an Expert if the World Needs to Worry About North Korea's H-Bomb Claims | VICE... - 0 views

  • World Needs to Worry About North Korea's H-Bomb Claims
    • proudsa
       
      TOK related --> lesson on statistics and probability and how we should react to those facts
  • any truth
  • claimed to have successfully tested a hydrogen bomb
  • ...1 more annotation...
  • said it was capable of miniaturizing its nuclear weapons and attaching them to rockets, meaning it would feasibly be able to blast them at all its imperialist pig-dog enemies in the USA—but obviously, these claims haven't yet been verified.
sissij

Language family - Wikipedia - 0 views

  • A language family is a group of languages related through descent from a common ancestor, called the proto-language of that family. The term 'family' reflects the tree model of language origination in historical linguistics, which makes use of a metaphor comparing languages to people in a biological family tree, or in a subsequent modification, to species in a phylogenetic tree of evolutionary taxonomy.
  • the Celtic, Germanic, Slavic, Romance, and Indo-Iranian language families are branches of a larger Indo-European language family. There is a remarkably similar pattern shown by the linguistic tree and the genetic tree of human ancestry[3] that was verified statistically.
  • A speech variety may also be considered either a language or a dialect depending on social or political considerations. Thus, different sources give sometimes wildly different accounts of the number of languages within a family. Classifications of the Japonic family, for example, range from one language (a language isolate) to nearly twenty.
  • ...2 more annotations...
  • A language isolated in its own branch within a family, such as Armenian within Indo-European, is often also called an isolate, but the meaning of isolate in such cases is usually clarified. For instance, Armenian may be referred to as an "Indo-European isolate". By contrast, so far as is known, the Basque language is an absolute isolate: it has not been shown to be related to any other language despite numerous attempts.
  • The common ancestor of a language family is seldom known directly since most languages have a relatively short recorded history. However, it is possible to recover many features of a proto-language by applying the comparative method, a reconstructive procedure worked out by 19th century linguist August Schleicher.
  •  
    I found this metaphor very accurate because I think languages certainly have some intimate relationship, like family members. Languages are not all completely different from one another and isolated. Although people speaking different languages may not understand one another, their languages are still connected. I think this article shows that languages are in some ways connected, like bridges instead of walls. --Sissi (11/26/2016)
Javier E

[Six Questions] | Astra Taylor on The People's Platform: Taking Back Power and Culture ... - 1 views

  • Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource
  • Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story.
  • why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.
  • ...15 more annotations...
  • How do we reconcile our enjoyment of social media even as we understand that the corporations who control them aren’t always acting in our best interests?
  • That was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous
  • the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors
  • I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse
  • these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.
  • I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently
  • social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?
  • That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.)
  • just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem.
  • What steps can we take to encourage better representation of independent and non-commercial media? We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.
  • 6. You advocate for greater government regulation of the Internet. Why is this important?
  • I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight.
  • I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real.
  • there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content
  • So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
Javier E

"Wikipedia Is Not Truth" - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • Timothy Messer-Kruse tried to update the Wiki page on the Haymarket riot of 1886 to correct a long-standing inaccurate claim. Even though he's written two books and numerous articles on the subject, his changes were instantly rejected: I had cited the documents that proved my point, including verbatim testimony from the trial published online by the Library of Congress. I also noted one of my own peer-reviewed articles. One of the people who had assumed the role of keeper of this bit of history for Wikipedia quoted the Web site's "undue weight" policy, which states that "articles should not give minority views as much or as detailed a description as more popular views."
  • "Explain to me, then, how a 'minority' source with facts on its side would ever appear against a wrong 'majority' one?" I asked the Wiki-gatekeeper. ...  Another editor cheerfully tutored me in what this means: "Wikipedia is not 'truth,' Wikipedia is 'verifiability' of reliable sources. Hence, if most secondary sources which are taken as reliable happen to repeat a flawed account or description of something, Wikipedia will echo that."
Javier E

Big Data Troves Stay Forbidden to Social Scientists - NYTimes.com - 0 views

  • When scientists publish their research, they also make the underlying data available so the results can be verified by other scientists.
  • lately social scientists have come up against an exception that is, true to its name, huge. It is “big data,” the vast sets of information gathered by researchers at companies like Facebook, Google and Microsoft from patterns of cellphone calls, text messages and Internet clicks by millions of users around the world. Companies often refuse to make such information public, sometimes for competitive reasons and sometimes to protect customers’ privacy. But to many scientists, the practice is an invitation to bad science, secrecy and even potential fraud.
  • corporate control of data could give preferential access to an elite group of scientists at the largest corporations.
  • “In the Internet era,” said Andreas Weigend, a physicist and former chief scientist at Amazon, “research has moved out of the universities to the Googles, Amazons and Facebooks of the world.”
  • A recent review found that 44 of 50 leading scientific journals instructed their authors on sharing data but that fewer than 30 percent of the papers they published fully adhered to the instructions. A 2008 review of sharing requirements for genetics data found that 40 of 70 journals surveyed had policies, and that 17 of those were “weak.”
Javier E

The Dangers of Pseudoscience - NYTimes.com - 0 views

  • the “demarcation problem,” the issue of what separates good science from bad science and pseudoscience (and everything in between). The problem is relevant for at least three reasons.
  • The first is philosophical: Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and of the nature of truth and discovery.
  • The second reason is civic: our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard.
  • Third, as an ethical matter, pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare,
  • It is precisely in the area of medical treatments that the science-pseudoscience divide is most critical, and where the role of philosophers in clarifying things may be most relevant.
  • some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work
  • There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark, which had been known to have beneficial effects since the time of Hippocrates. There is also no mystery about how this happens: people have more or less randomly tried solutions to their health problems for millennia, sometimes stumbling upon something useful
  • What makes the use of aspirin “scientific,” however, is that we have validated its effectiveness through properly controlled trials, isolated the active ingredient, and understood the biochemical pathways through which it has its effects
  • In terms of empirical results, there are strong indications that acupuncture is effective for reducing chronic pain and nausea, but sham therapy, where needles are applied at random places, or are not even pierced through the skin, turns out to be equally effective (see for instance this recent study on the effect of acupuncture on post-chemotherapy chronic fatigue), thus seriously undermining talk of meridians and Qi lines.
  • Asma at one point compares the current inaccessibility of Qi energy to the previous (until this year) inaccessibility of the famous Higgs boson,
  • But the analogy does not hold. The existence of the Higgs had been predicted on the basis of a very successful physical theory known as the Standard Model. This theory is not only exceedingly mathematically sophisticated, but it has been verified experimentally over and over again. The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force
  • Philosophers of science have long recognized that there is nothing wrong with positing unobservable entities per se, it’s a question of what work such entities actually do within a given theoretical-empirical framework. Qi and meridians don’t seem to do any, and that doesn’t seem to bother supporters and practitioners of Chinese medicine. But it ought to.
  • what’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help?
  • we can incorporate whatever serendipitous discoveries from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine, there’s only stuff that works and stuff that doesn’t.
  • Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them.
  • pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.
  • Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.
  • Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare).
  • The verdict by philosopher Larry Laudan, echoed by Asma, that the demarcation problem is dead and buried, is not shared by most contemporary philosophers who have studied the subject.
  • the criterion of falsifiability, for example, is still a useful benchmark for distinguishing science and pseudoscience, as a first approximation. Asma’s own counterexample inadvertently shows this: the “cleverness” of astrologers in cherry-picking what counts as a confirmation of their theory, is hardly a problem for the criterion of falsifiability, but rather a nice illustration of Popper’s basic insight: the bad habit of creative fudging and finagling with empirical data ultimately makes a theory impervious to refutation. And all pseudoscientists do it, from parapsychologists to creationists and 9/11 Truthers.
  • The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality.
Javier E

Why Our Memory Fails Us - NYTimes.com - 1 views

  • how we all usually respond when our memory is challenged. We have an abstract understanding that people can remember the same event differently.
  • But when our own memories are challenged, we may neglect all this and instead respond emotionally, acting as though we must be right and everyone else must be wrong.
  • It’s no accident that Oprah Winfrey’s latest best seller is called “What I Know For Sure,” rather than “Some Things That Might Be True.”
  • Our lack of appreciation for the fallibility of our own memories can lead to much bigger problems than a misattributed quote.
  • Memory failures that resemble Dr. Tyson’s mash-up of distinct experiences have led to false convictions, and even death sentences. Whose memories we believe and whose we disbelieve influence how we interpret controversial public events, as demonstrated most recently by the events in Ferguson, Mo.
  • For false memories, higher confidence was associated with lower accuracy.
  • In general, if you have seen something before, your confidence that you have seen it and your accuracy in recalling it are linked: The more confident you are in your memory, the more likely you are to be right
  • This fall the panel (which one of us, Daniel Simons, served on) released a comprehensive report that recommended procedures to minimize the chances of false memory and mistaken identification, including videotaping police lineups and improving jury instructions.
  • When we recall our own memories, we are not extracting a perfect record of our experiences and playing it back verbatim. Most people believe that memory works this way, but it doesn’t. Instead, we are effectively whispering a message from our past to our present, reconstructing it on the fly each time. We get a lot of details right, but when our memories change, we only “hear” the most recent version of the message, and we may assume that what we believe now is what we always believed.
  • Studies find that even our “flashbulb memories” of emotionally charged events can be distorted and inaccurate, but we cling to them with the greatest of confidence.
  • With each retrieval our memories can morph, and so can our confidence in them. This is why the National Academy of Sciences report strongly advised courts to rely on initial statements rather than courtroom proclamations:
  • In fact, the mere act of describing a person’s appearance can change how likely you are to pick him out of a lineup later. This finding, known as “verbal overshadowing,” had been controversial, but was recently verified in a collective effort by more than 30 separate research labs.
  • The science of memory distortion has become rigorous and reliable enough to help guide public policy. It should also guide our personal attitudes and actions.
  • It is just as misguided to conclude that someone who misremembers must be lying as it is to defend a false memory in the face of contradictory evidence. We should be more understanding of mistakes by others, and credit them when they admit they were wrong. We are all fabulists, and we must all get used to it.
  • Subliminal is a good book on this subject, as is Thinking, Fast and Slow. It is not merely that our memory is a game of telephone in which we garble the memory as we retrieve and re-store it. It appears to be a very useful part of existence, which has stayed with us or gotten stronger with evolution. There seems to be utility in forgetting, misremembering, and having memories coalesce with those of others in our social groups.
Emilio Ergueta

Does God Exist? | Issue 99 | Philosophy Now - 1 views

  • On April 8, 1966, Time magazine carried a lead story for which the cover was completely black except for three words emblazoned in bright, red letters against the dark background: “IS GOD DEAD?” The story described the so-called ‘Death of God’ movement then current in American theology. But, to paraphrase Mark Twain, it seemed that the news of God’s demise was “greatly exaggerated.” For at the same time that theologians were writing God’s obituary, a new generation of young philosophers was re-discovering His vitality.
  • Back in the 1940s and ’50s it was widely believed among philosophers that any talk about God is meaningless, since it is not verifiable by the five senses.
  • Its downfall meant a resurgence of metaphysics, along with other traditional problems of philosophy which Verificationism had suppressed. Accompanying this resurgence came something altogether unanticipated: a renaissance of Christian philosophy.
  • The turning point probably came in 1967 with the publication of Alvin Plantinga’s God and Other Minds, which applied the tools of analytic philosophy to questions in the philosophy of religion with an unprecedented rigor and creativity
  • The renaissance of Christian philosophy has been accompanied by a resurgence of interest in natural theology – that branch of theology which seeks to prove God’s existence without appeal to the resources of authoritative divine revelation – for instance, through philosophical argument.
  • the New Atheism is, in fact, a pop-cultural phenomenon lacking in intellectual muscle and blissfully ignorant of the revolution that has taken place in Anglo-American philosophy. It tends to reflect the scientism of a bygone generation, rather than the contemporary intellectual scene.
Javier E

A Reply To Jonathan Chait On Stimulus | The New Republic - 0 views

  • It is certainly true that a large majority of professional economists accept the view that “increasing spending or reducing taxes temporarily increases economic growth”—but that is very far from claiming that disputing it is largely a political campaign.
  • in a genuinely scientific field which has accepted a predictive rule as valid to the point that there is a true consensus—such that the only reason for refusal to accept it is crankery or, in Chait’s terms, “politics”—you don’t usually see: several full professors at the top two departments in the subject, when speaking directly in their area of research expertise, challenge it; 10 percent of all practitioners in the field refuse to accept it; and the two leading general-circulation publications in the field running op-eds questioning it.
  • A great many leading economists may accept the proposition that enough stimulus spending will probably cause at least some increase in output for a short period of time in some circumstances, yet are still uncomfortable with the kind of stimulus spending strategy that is the actual subject of current political debate. In 2009, James Buchanan (1986 Nobel Laureate in Economics), Edward Prescott (2004 Nobel Laureate in Economics), and Vernon Smith (2002 Nobel Laureate in Economics) promulgated this statement: “Notwithstanding reports that all economists are now Keynesians and that we all support a big increase in the burden of government, we do not believe that more government spending is a way to improve economic performance.”
  • It is nerdy-sounding, but I believe critical to this discussion, to distinguish between measurement and knowledge. I made a very strong claim about measurement, and a very specific claim about knowledge. I claim that we cannot usefully measure the effect of the stimulus program launched in 2009 at all.
  • All potentially useful predictions made about the output impact of the stimulus program are non-falsifiable. Failure of predictions can be simply justified by this sort of ad hoc explanation after the fact.
  • This argument will always degenerate back into endlessly dueling regressions, because there is no ability to adjudicate among them via experiment.
  • It simply means that we have no scientific knowledge about this topic. Macroeconomic assertions about the effect of a proposed stimulus policy are not valueless, but despite their complex mathematical justifications, do not have standing as knowledge that can trump common sense, historical reasoning, and so on in the same way that a predictive rule that has been verified through experimental testing can.
  • When using stimulus to ameliorate the economic crisis, we are like primitive tribesmen using herbs to treat an infection, and we should not allow ourselves to imagine that we are using antibiotics that have been proven through clinical trials. This should not imply merely a different feeling about the same actions, but should rationally lead us to greater circumspection.
  •  
    An incisive analysis of knowledge claims in economics, and Keynesian approaches.
jongardner04

Your Scientific Reasoning Is More Flawed Than You Think - Scientific American - 0 views

  • And in a world where many high-stakes issues fundamentally boil down to science, this is clearly a problem. 
  • Naturally, the solution to the problem lies in good schooling — emptying minds of their youthful hunches and intuitions about how the world works, and repopulating them with sound scientific principles that have been repeatedly tested and verified. Wipe out the old operating system, and install the new
  • Although science as a field discards theories that are wrong or lacking, Shtulman and Valcarcel’s work suggests that individuals —even scientifically literate ones — tend to hang on to their early, unschooled, and often wrong theories about the natural world.
  • Taken together, these findings suggest that we may be innately predisposed to have certain theories about the natural world that are resilient to being drastically replaced or restructured.
  • While some theories of learning consider the unschooled mind to be a ‘bundle of misconceptions’ in need of replacement, perhaps replacement is an unattainable goal. Or, if it is attainable, we may need to rethink how science is taught.
  •  
    Discusses scientific reasoning, which relates with the things we are learning about with scientific method. 