Home / TOK Friends / Group items tagged information

Javier E

Why Is It So Hard to Be Rational? | The New Yorker

  • an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio).
  • When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.
  • And yet rationality has sharp edges that make it hard to put at the center of one’s life
  • You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,”
  • You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus.
  • Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect.
  • modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.”
  • Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.
  • Rationality is one of humanity’s superpowers. How do we keep from misusing it?
  • Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.
  • Bayesian reasoning implies a few “best practices.”
  • Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat
  • We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust.
  • But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities.
  • Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful.
  • the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size.
  • In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization.
  • Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive.
  • You can know what’s right but still struggle to do it.
  • Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds.
  • For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. 
  • I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart.
  • between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.
  • Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.
  • in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together.
  • The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula.
  • The real challenge isn’t being right but knowing how wrong you might be. (Joshua Rothman, August 16, 2021)
  • Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?).
  • Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?
  • For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest.
  • In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently
  • Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.)
  • Galef tends to see rationality as a method for acquiring more accurate views.
  • Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want.
  • Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.”
  • A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.
  • In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before
  • The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”
  • metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational
  • There are many calibration methods
  • Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge.
  • Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps.
  • The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes
  • So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.
  • the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.
  • Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information.
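The update rule the excerpts above describe can be made concrete with a small worked example. This is a minimal sketch with invented numbers: the 30% prior and the two likelihoods (how often a supporting report would appear if the claim were true vs. false) are assumptions for illustration only.

```python
# Bayesian updating in one step: modify a prior belief to an
# appropriate degree when new evidence arrives, rather than
# replacing old information wholesale.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability that a claim is true,
    given evidence that would appear with the stated likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Prior: 30% chance the claim is true. A supporting report would
# appear 80% of the time if true, 40% of the time if false.
posterior = bayes_update(0.30, 0.80, 0.40)
print(round(posterior, 3))  # 0.462
```

Note that the evidence nudges the probability from 0.30 toward 0.46 rather than flipping it to certainty: the cooking is never done.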

Book Review - The Information - By James Gleick - NYTimes.com

  • Information, he argues, is more than just the contents of our overflowing libraries and Web servers. It is “the blood and the fuel, the vital principle” of the world. Human consciousness, society, life on earth, the cosmos — it’s bits all the way down.
  • Shannon’s paper, published the same year as the invention of the transistor, instantaneously created the field of information theory, with broad applications in engineering and computer science.
  • information theory wound up reshaping fields from economics to philosophy, and heralded a dramatic rethinking of biology and physics.
  • molecular biologists were soon speaking of information, not to mention codes, libraries, alphabets and transcription, without any sense of metaphor. In Gleick’s words, “Genes themselves are made of bits.” At the same time, physicists exploring what Einstein had called the “spooky” paradoxes of quantum mechanics began to see information as the substance from which everything else in the universe derives. As the physicist John Archibald Wheeler put it in a paper title, “It From Bit.”
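Shannon's central measure, entropy in bits, can be illustrated in a few lines. The coin probabilities below are invented for illustration; the point is that a predictable source carries less information per symbol than an unpredictable one.

```python
import math

# Shannon entropy: the average information content of a source,
# measured in bits. A fair coin toss carries exactly one bit;
# a biased coin carries less, because its outcome is partly predictable.

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))           # fair coin: 1.0 bit per toss
print(round(entropy([0.9, 0.1]), 3)) # biased coin: 0.469 bits per toss
```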

The Backfire Effect « You Are Not So Smart

  • corrections tended to increase the strength of the participants’ misconceptions if those corrections contradicted their ideologies. People on opposing sides of the political spectrum read the same articles and then the same corrections, and when new evidence was interpreted as threatening to their beliefs, they doubled down. The corrections backfired.
  • Once something is added to your collection of beliefs, you protect it from harm. You do it instinctively and unconsciously when confronted with attitude-inconsistent information. Just as confirmation bias shields you when you actively seek information, the backfire effect defends you when the information seeks you, when it blindsides you. Coming or going, you stick to your beliefs instead of questioning them. When someone tries to correct you, tries to dilute your misconceptions, it backfires and strengthens them instead. Over time, the backfire effect helps make you less skeptical of those things which allow you to continue seeing your beliefs and attitudes as true and proper.
  • Psychologists call stories like these narrative scripts, stories that tell you what you want to hear, stories which confirm your beliefs and give you permission to continue feeling as you already do. If believing in welfare queens protects your ideology, you accept it and move on.
  • Contradictory evidence strengthens the position of the believer. It is seen as part of the conspiracy, and missing evidence is dismissed as part of the coverup.
  • Most online battles follow a similar pattern, each side launching attacks and pulling evidence from deep inside the web to back up their positions until, out of frustration, one party resorts to an all-out ad hominem nuclear strike
  • you can never win an argument online. When you start to pull out facts and figures, hyperlinks and quotes, you are actually making the opponent feel as though they are even more sure of their position than before you started the debate. As they match your fervor, the same thing happens in your skull. The backfire effect pushes both of you deeper into your original beliefs.
  • you spend much more time considering information you disagree with than you do information you accept. Information which lines up with what you already believe passes through the mind like a vapor, but when you come across something which threatens your beliefs, something which conflicts with your preconceived notions of how the world works, you seize up and take notice. Some psychologists speculate there is an evolutionary explanation. Your ancestors paid more attention and spent more time thinking about negative stimuli than positive because bad things required a response
  • when your beliefs are challenged, you pore over the data, picking it apart, searching for weakness. The cognitive dissonance locks up the gears of your mind until you deal with it. In the process you form more neural connections, build new memories and put out effort – once you finally move on, your original convictions are stronger than ever.
  • The backfire effect is constantly shaping your beliefs and memory, keeping you consistently leaning one way or the other through a process psychologists call biased assimilation.
  • They then separated subjects into two groups; one group said they believed homosexuality was a mental illness and one did not. Each group then read the fake studies full of pretend facts and figures suggesting their worldview was wrong. On either side of the issue, after reading studies which did not support their beliefs, most people didn’t report an epiphany, a realization that they’d been wrong all these years. Instead, they said the issue was something science couldn’t understand. When asked about other topics later on, like spanking or astrology, these same people said they no longer trusted research to determine the truth. Rather than shed their belief and face facts, they rejected science altogether.
  • As social media and advertising progresses, confirmation bias and the backfire effect will become more and more difficult to overcome. You will have more opportunities to pick and choose the kind of information which gets into your head along with the kinds of outlets you trust to give you that information. In addition, advertisers will continue to adapt, not only generating ads based on what they know about you, but creating advertising strategies on the fly based on what has and has not worked on you so far. The media of the future may be delivered based not only on your preferences, but on how you vote, where you grew up, your mood, the time of day or year – every element of you which can be quantified. In a world where everything comes to you on demand, your beliefs may never be challenged.

They're Watching You at Work - Don Peck - The Atlantic

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job.
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
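The weighted-signal scoring that the Gild annotations above describe can be illustrated with a minimal sketch. Every feature name and weight below is invented for illustration; Gild's actual model is proprietary and far more sophisticated than a linear combination.

```python
# Hypothetical sketch of a people-analytics score: combine normalized
# signals (each in [0, 1]) into a single 0-100 score via a weighted sum.
# Feature names and weights are assumptions, not Gild's real model.

def score_programmer(features, weights=None):
    """Return a 0-100 score from a dict of normalized signals."""
    if weights is None:
        weights = {
            "code_simplicity": 0.3,   # from static analysis of open-source code
            "adoption_rate": 0.3,     # how often others reuse the code
            "qa_reputation": 0.2,     # votes on Stack Overflow-style answers
            "language_signals": 0.2,  # phrasing correlated with strong coders
        }
    total = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return round(100 * total, 1)

candidate = {"code_simplicity": 0.9, "adoption_rate": 0.7,
             "qa_reputation": 0.8, "language_signals": 0.6}
print(score_programmer(candidate))  # 76.0
```

The interesting part of the real system is not the sum but where the feature values come from: correlating online language and behavior with independently measured code quality, as the annotations note.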
krystalxu

How Do We Memorize Things? - 0 views

  • There are 3 steps to remembering things: 1) Encode – get the information into our brains; 2) Store – retain the information; 3) Retrieve – get the information out
  • 1) Elaborative encoding: We actively relate new information to the information that is already in our memories.
  • 2) Visual encoding: We store information by converting the information into images.
  • 3) Organizational encoding: We categorize information according to relationships.
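The encode → store → retrieve steps above map loosely onto a familiar software pattern. The sketch below is only an analogy for the model's structure (including elaborative encoding's linking of new material to old); it is not a claim about how biological memory works, and all keys and values are invented.

```python
# The three-step memory model as a loose software analogy:
# encoding transforms raw input, storage retains it, retrieval gets it back.

memory = {}

def encode(raw, related_to=None):
    """Elaborative encoding: attach links to already-stored, related items."""
    links = [key for key in memory if related_to and related_to in key]
    return {"content": raw, "links": links}

def store(key, trace):
    """Storage: retain the encoded trace."""
    memory[key] = trace

def retrieve(key):
    """Retrieval: get the information back out (None if never stored)."""
    return memory.get(key)

store("paris", encode("capital of France"))
store("paris-trip", encode("visited in 2019", related_to="paris"))
print(retrieve("paris-trip")["links"])  # ['paris']
```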
Javier E

Law professor Kim Wehle's latest book is 'How To Think Like a Lawyer - and Why' : NPR - 0 views

  • a five-step process she calls the BICAT method.
  • KIM WEHLE: B is to break a problem down into smaller pieces
  • I is to identify our values. A lot of people think lawyers are really about winning all the time. But the law is based on a value system. And I suggest that people be very deliberate about what matters to them with whatever decision there is
  • C is to collect a lot of information. Thirty years ago, the challenge was finding information in a card catalog at the library. Now it's, how do we separate the good stuff from the bad stuff?
  • A is to analyze both sides. Lawyers have to turn the coin over and exhaust counterarguments or we'll lose in court.
  • So lawyers are trained to look for the gray areas, to look for the questions, not the answers. And if we kind of orient our thinking that way, I think we're less likely to shut down competing points of view.
  • My argument in the book is, we can feel good about a decision even if we don't get everything that we want. We have to make compromises.
  • I tell my students, you'll get through the bar. The key is to look for questions and not answers. If you could answer every legal question with a Wikipedia search, there would be no reason to hire lawyers.
  • Lawyers are hired because there are arguments on both sides, you know? Every Supreme Court decision that is split 6-3, 5-4, that means there were really strong arguments on both sides.
  • T is, tolerate the fact that you won't get everything you want every time
  • So we have to be very careful about the source of what you're getting, OK? Is this source neutral? Does this source really care about facts and not so much about an agenda?
  • Step 3, the collecting information piece. I think it's a new skill for all of us that we are overloaded with information into our phones. We have algorithms that somebody else developed that tailor the information that comes into our phones based on what the computer thinks we already believe
  • No. 2 - this is the beauty of social media and the internet - you can pull original sources. We can click on the indictment. Click on the new bill that has been proposed in the United States Congress.
  • then the book explains ways that you can then sort through that information for yourself. Skills are empowering.
  • Maybe as a replacement for sort of being empowered by being part of a team - a red team versus a blue team - that's been corrosive, I think, in American politics and American society. But arming ourselves with good facts, that leads to self-determination.
  • MARTINEZ: Now, you've written two other books - "How To Read The Constitution" and "What You Need To Know About Voting" - along with this one, "How To Think Like A Lawyer - And Why."
  • It kind of makes me think, Kim, that you feel that Americans might be lacking a basic level of civics education or understanding. So what is lacking when it comes to teaching civics or in civics discourse today?
  • studies have shown that around a third of Americans can't name the three branches of government. But if we don't understand our government, we don't know how to hold our government accountable
  • Democracies can't stay open if we've got elected leaders that are caring more about entrenching their own power and misinformation than actually preserving democracy by the people. I think that's No. 1.
  • No. 2 has to do with a value system. We talk about American values - reward for hard work, integrity, honesty. The same value system should apply to who we hire for government positions. And I think Americans have lost that.
  • in my own life, I'm very careful about who gets to be part of the inner circle because I have a strong value system. Bring that same sense to bear at the voting booth. Don't vote for red versus blue. Vote for people that live your value system
  • just like the Ukrainians are fighting for their children's democracy, we need to do that as well. And we do that through informing ourselves with good information, tolerating competing points of view and voting - voting, voting, voting - to hold elected leaders accountable if they cross boundaries that matter to us in our own lives.
demetriar

Teenagers Seek Health Information Online, but Don't Always Trust It - NYTimes.com - 0 views

  • Four out of five teenagers turn to the Internet for health information, but they don’t always put much stock in what they find, according to a national survey released on Tuesday.
  • The source they really trust with questions about health? Surprise: their parents.
  • One in three teenagers said they changed their behavior because of what they had learned from online sites or apps.
  • Still, most teenagers said they read the first few sites that popped up in an online search, rather than delving further. The information often did not directly address their concerns or was densely written, they said.
  • When asked where they got “a lot” of health information, 55 percent of the teenagers cited their parents. School health classes and medical providers also ranked higher than the Internet as preferred sources.
  • teenagers do not search online the way adults do, which contributes to the uneven quality of the information they receive.
  • “We should be able to provide kids with better, independent online health resources.”
katedriscoll

Frontiers | A Neural Network Framework for Cognitive Bias | Psychology - 0 views

  • Human decision-making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. To substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions, and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
  • The cognitive-psychological (or heuristics and biases) perspective (Evans, 2008; Kahneman and Klein, 2009) attributes cognitive biases to limitations in the available data and in the human information processing capacity (Simon, 1955; Broadbent, 1958; Kahneman, 1973, 2003; Norman and Bobrow, 1975)
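The "Association" principle named above is often illustrated with Hebbian learning ("units that fire together wire together"): repeated co-activation strengthens connections, which is one way unrelated pieces of information end up linked. The sketch below is a minimal illustration only; the unit names and learning rate are invented, and the paper's framework is far richer than this.

```python
# Minimal Hebbian-learning sketch of the Association principle:
# every pair of co-active units gets its connection strengthened.
import itertools

def hebbian_update(weights, active_units, lr=0.1):
    """Strengthen the connection between every pair of co-active units."""
    for a, b in itertools.combinations(sorted(active_units), 2):
        weights[(a, b)] = weights.get((a, b), 0.0) + lr
    return weights

w = {}
for _ in range(5):                      # "shark" and "danger" co-occur often
    hebbian_update(w, {"shark", "danger"})
hebbian_update(w, {"shark", "ocean"})   # "ocean" co-occurs only once

print(round(w[("danger", "shark")], 1))  # 0.5 -- the stronger association
```

The repeated pairing ends up with five times the weight of the single one, which is the mechanical substrate for the inclination to "associate and combine" information that the abstract describes.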
mshilling1

Why we believe fake news - BBC Future - 0 views

  • Less commonplace is the acknowledgement that human judgements also rely upon secondary information that doesn’t come from any external source – and that offers one of the most powerful tools we possess for dealing with the deluge itself. This source is social information. Or, in other words: what we think other people are thinking.
  • Your senses inform you that other people are moving frantically. But it’s the social interpretation you put on this information that tells you what you most need to know: these people believe that something bad is happening, and this means you should probably be trying to escape too.
  • Assuming they have no first-hand knowledge of the claim, it’s theoretically possible for them to look it up elsewhere – a process of laborious verification that involves trawling through countless claims and counter-claims. They also, however, possess a far simpler method of evaluation, which is to ask what other people seem to think.
  • As Hendricks and Hansen put it, “when you don’t possess sufficient information to solve a given problem, or if you just don’t want to or have the time for processing it, then it can be rational to imitate others by way of social proof”
  • When we either know very little about something, or the information surrounding it is overwhelming, it makes excellent sense to look to others’ apparent beliefs as an indication of what is going on. In fact, this is often the most reasonable response, so long as we have good reason to believe that others have access to accurate information; and that what they seem to think and what they actually believe are the same.
  • Networks where members are, for example, randomly exposed to a range of views are less likely to experience cascades of unchallenged belief
  • And the more we understand the chain of events that led someone towards a particular perspective, the more we understand what it might mean to arrive at other views – or, equally importantly, to sow the seeds of sceptical engagement.
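The cascade dynamic these annotations describe can be made concrete with a toy simulation: each agent receives a weak private signal about the true state, but imitates the crowd once it clearly leans one way. All parameters (signal accuracy, the two-vote threshold) are illustrative assumptions, not drawn from the article.

```python
# Toy information-cascade simulation: social proof vs. private signals.
import random

def run_cascade(n_agents, signal_accuracy=0.6, truth=True, seed=0):
    """Agents choose True/False in sequence; each sees all earlier choices."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # A weak private signal, correct with probability signal_accuracy.
        private = truth if rng.random() < signal_accuracy else (not truth)
        yes, no = choices.count(True), choices.count(False)
        if abs(yes - no) >= 2:
            choices.append(yes > no)   # crowd clearly leans: imitate it
        else:
            choices.append(private)    # otherwise trust your own signal
    return choices

choices = run_cascade(50)
print(f"{choices.count(True)} of {len(choices)} agents chose True")
```

Once a two-vote lead forms, every later agent copies the majority, so the lead only grows and the cascade locks in whether or not it happens to match the truth. That is why networks whose members are randomly exposed to a range of views, as the annotation above notes, are less likely to experience cascades of unchallenged belief.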
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretches back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

The Philosopher Whose Fingerprints Are All Over the FTC's New Approach to Privacy - Ale... - 0 views

  • The standard explanation for privacy freakouts is that people get upset because they've "lost control" of data about themselves or there is simply too much data available. Nissenbaum argues that the real problem "is the inappropriateness of the flow of information due to the mediation of technology." In her scheme, there are senders and receivers of messages, who communicate different types of information with very specific expectations of how it will be used. Privacy violations occur not when too much data accumulates or people can't direct it, but when one of the receivers or transmission principles changes. The key academic term is "context-relative informational norms." Bust a norm and people get upset.
  • Nissenbaum gets us past thinking about privacy as a binary: either something is private or something is public. Nissenbaum puts the context -- or social situation -- back into the equation. What you tell your bank, you might not tell your doctor.
  • Furthermore, these differences in information sharing are not bad or good; they are just the norms.
  • any privacy regulation that's going to make it through Congress has to provide clear ways for companies to continue profiting from data tracking. The key is coming up with an ethical framework in which they can do so, and Nissenbaum may have done just that. 
  • The traditional model of how this works says that your information is something like a currency and when you visit a website that collects data on you for one reason or another, you enter into a contract with that site. As long as the site gives you "notice" that data collection occurs -- usually via a privacy policy located through a link at the bottom of the page -- and you give "consent" by continuing to use the site, then no harm has been done. No matter how much data a site collects, if all they do is use it to show you advertising they hope is more relevant to you, then they've done nothing wrong.
  • let companies do standard data collection but require them to tell people when they are doing things with data that are inconsistent with the "context of the interaction" between a company and a person.
  • How can anyone make a reasonable determination of how their information might be used when there are more than 50 or 100 or 200 tools in play on a single website in a single month?
  • Nissenbaum doesn't think it's possible to explain the current online advertising ecosystem in a useful way without resorting to a lot of detail. She calls this the "transparency paradox," and considers it insoluble.
  • she wants to import the norms from the offline world into the online world. When you go to a bank, she says, you have expectations of what might happen to your communications with that bank. That should be true whether you're online, on the phone, or at the teller.  Companies can use your data to do bank stuff, but they can't sell your data to car dealers looking for people with a lot of cash on hand.
  • Nevermind that if you actually read all the privacy policies you encounter in a year, it would take 76 work days. And that calculation doesn't even account for all the 3rd parties that drain data from your visits to other websites. Even more to the point: there is no obvious way to discriminate between two separate webpages on the basis of their data collection policies. While tools have emerged to tell you how many data trackers are being deployed at any site at a given moment, the dynamic nature of Internet advertising means that it is nearly impossible to know the story through time
  • here's the big downside: it rests on the "norms" that people expect. While that may be socially optimal, it's actually quite difficult to figure out what the norms for a given situation might be. After all, there is someone else who depends on norms for his thinking about privacy.
Javier E

SOPA Boycotts and the False Ideals of the Web - NYTimes.com - 1 views

  • Those rare tech companies that have come out in support of SOPA are not merely criticized but barred from industry events and subject to boycotts. We, the keepers of the flame of free speech, are banishing people for their speech. The result is a chilling atmosphere, with people afraid to speak their minds.
  • Our melodrama is driven by a vision of an open Internet that has already been distorted, though not by the old industries that fear piracy. For instance, until a year ago, I enjoyed a certain kind of user-generated content very much: I participated in forums in which musicians talked about musical instruments.
  • proprietary social networking — is ending my freedom to participate in the forums I used to love, at least on terms I accept. Like many other forms of contact, the musical conversations are moving into private sites, particularly Facebook. To continue to participate, I’d have to accept Facebook’s philosophy, under which it analyzes me, and is searching for new ways to charge third parties for the use of that analysis.
  • You might object that it’s all based on individual choice. That argument ignores the consequences of networks, and the way they function. After a certain point choice is reduced.
  • What if ordinary users routinely earned micropayments for their contributions? If all content were valued instead of only mogul content, perhaps an information economy would elevate success for all. But under the current terms of debate that idea can barely be whispered.
  • Once networks are established, it is hard to reduce their power. Google’s advertisers, for instance, know what will happen if they move away. The next-highest bidder for each position in Google’s auction-based model for selling ads will inherit that position if the top bidder goes elsewhere. So Google’s advertisers tend to stay put because the consequences of leaving are obvious to them
  • The obvious strategy in the fight for a piece of the advertising pie is to close off substantial parts of the Internet so Google doesn’t see it all anymore. That’s how Facebook hopes to make money, by sealing off a huge amount of user-generated information into a separate, non-Google world.
  • it’s not Facebook’s fault! We, the idealists, insisted that information be able to flow freely online, which meant that services relating to information, instead of the information itself, would be the main profit centers. Some businesses do sell content, but that doesn’t address the business side of everyday user-generated content. The adulation of “free content” inevitably meant that “advertising” would become the biggest business in the open part of the information economy
  • We in Silicon Valley undermined copyright to make commerce become more about services instead of content — more about our code instead of their files. The inevitable endgame was always that we would lose control of our own personal content, our own files.
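The auction dynamic described in the annotations above, where the next-highest bidder simply inherits a position when the top bidder leaves, can be sketched in a few lines (a hypothetical illustration with invented advertiser names and bid values, not Google's actual ad-auction code):

```python
# Hypothetical sketch of a position auction: advertisers are ranked by bid,
# and if the top bidder departs, the next-highest inherits the slot.
# Names and bid values are invented for illustration.

def assign_positions(bids):
    """Return advertisers ordered by bid, highest first (position 1 onward)."""
    return [name for name, _ in sorted(bids.items(), key=lambda kv: -kv[1])]

bids = {"alpha": 3.00, "beta": 2.50, "gamma": 1.75}
print(assign_positions(bids))   # ['alpha', 'beta', 'gamma']

del bids["alpha"]               # top bidder leaves for another platform...
print(assign_positions(bids))   # ['beta', 'gamma'] -- beta inherits the top slot
```

Because each bidder can see exactly who would inherit their slot if they left, the cost of leaving is obvious and quantifiable, which is the lock-in effect the excerpt describes.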
Javier E

Big Think Interview With Nicholas Carr | Nicholas Carr | Big Think - 0 views

  • Neurologically, how does our brain adapt itself to new technologies? Nicholas Carr: A couple of types of adaptations take place in your brain. One is a strengthening of the synaptical connections between the neurons involved in using that instrument, in using that tool. And basically these are chemical – neural chemical changes. So you know, cells in our brain communicate by transmitting electrical signals between them and those electrical signals are actually activated by the exchange of chemicals, neurotransmitters in our synapses. And so when you begin to use a tool, for instance, you have much stronger electrochemical signals being processed in those – through those synaptical connections. And then the second, and even more interesting adaptation is in actual physical changes, anatomical changes. Your neurons, you may grow new neurons that are then recruited into these circuits or your existing neurons may grow new synaptical terminals. And again, that also serves to strengthen the activity in those, in those particular pathways that are being used – new pathways. On the other hand, you know, the brain likes to be efficient and so even as it's strengthening the pathways you're exercising, it's pulling – it's weakening the connections in other ways between the cells that supported old ways of thinking or working or behaving, or whatever that you're not exercising so much.
  • And it was only in around the year 800 or 900 that we saw the introduction of word spaces. And suddenly reading became, in a sense, easier and suddenly you had the arrival of silent reading, which changed the act of reading from just transcription of speech to something that every individual did on their own. And suddenly you had this whole deal of the silent solitary reader who was improving their mind, expanding their horizons, and so forth. And when Gutenberg invented the printing press around 1450, what that served to do was take this new very attentive, very deep form of reading, which had been limited to just, you know, monasteries and universities, and by making books much cheaper and much more available, spread that way of reading out to a much larger mass of audience. And so we saw, for the last 500 years or so, one of the central facts of culture was deep solitary reading.
  • What the book does as a technology is shield us from distraction. The only thing going on is the, you know, the progression of words and sentences across page after page and so suddenly we see this immersive kind of very attentive thinking, whether you are paying attention to a story or to an argument, or whatever. And what we know about the brain is the brain adapts to these types of tools.
  • we adapt to the environment of the internet, which is an environment of kind of constant immersion and information and constant distractions, interruptions, juggling lots of messages, lots of bits of information.
  • Because it’s no longer just a matter of personal choice, of personal discipline, though obviously those things are always important, but what we’re seeing and we see this over and over again in the history of technology, is that the technology – the technology of the web, the technology of digital media, gets entwined very, very deeply into social processes, into expectations. So more and more, for instance in our work lives. You know, if our boss and all our colleagues are constantly exchanging messages, constantly checking email on their Blackberry or iPhone or their Droid or whatever, then it becomes very difficult to say, I’m not going to be as connected because you feel like your career is going to take a hit.
  • With the arrival – with the transfer now of text more and more onto screens, we see, I think, a new and in some ways more primitive way of reading. In order to take in information off a screen, when you are also being bombarded with all sorts of other information and when there are links in the text where you have to think even for just a fraction of a second, you know, do I click on this link or not. Suddenly reading again becomes a more cognitively intensive act, the way it was back when there were no spaces between words.
  • If all your friends are planning their social lives through texts and Facebook and Twitter and so forth, then to back away from that means to feel socially isolated. And of course for all people, particularly for young people, there’s kind of nothing worse than feeling socially isolated, that your friends are you know, having these conversations and you’re not involved. So it’s easy to say the solution, which is to, you know, becomes a little bit more disconnected. What’s hard it actually doing that.
  • if you want to change your brain, you change your habits. You change your habits of thinking. And that means, you know, setting aside time to engage in more contemplative, more reflective ways of thinking, to be – to screen out distractions. And that means retreating from digital media and from the web and from Smart Phones and texting and Facebook and Tweeting and everything else.
  • The Thinker was, you know, in a contemplative pose and was concentrating deeply, and wasn’t you know, multi-tasking. And because that is something that, until recently anyway, people always thought was the deepest and most distinctly human way of thinking.
  • we may end up finding that those are actually the most valuable ways of thinking that are available to us as human beings.
  • the ability to pay attention also is very important for our ability to build memories, to transfer information from our short-term memory to our long-term memory. And only when we do that do we weave new information into everything else we have stored in our brains. All the other facts we’ve learned, all the other experiences we’ve had, emotions we’ve felt. And that’s how you build, I think, a rich intellect and a rich intellectual life.
  • On the other hand, there is a cost. We lose – we begin to lose the facilities that we don’t exercise. So adaptation has both a very, very positive side, but also a potentially negative side because ultimately our brain is qualitatively neutral. It doesn’t care what it’s strengthening or what it’s weakening; it just responds to the way we’re exercising our mind.
  • the book in some ways is the most interesting from our own present standpoint, particularly when we want to think about the way the internet is changing us. It’s interesting to think about how the book changed us.
  • So we become, after the arrival of the printing press in general, more attentive more attuned to contemplative ways of thinking. And that’s a very unnatural way of using our mind. You know, paying attention, filtering out distractions.
  • what we lose is the ability to pay deep attention to one thing for a sustained period of time, to filter out distractions.
Javier E

Health Facts Aren't Enough. Should Persuasion Become a Priority? - The New York Times - 0 views

  • In a paper published early this year in Nature Human Behavior, scientists asked 500 Americans what they thought about foods that contained genetically modified organisms.
  • The vast majority, more than 90 percent, opposed their use. This belief is in conflict with the consensus of scientists
  • The second finding of the study was more eye-opening. Those who were most opposed to genetically modified foods believed they were the most knowledgeable about this issue, yet scored the lowest on actual tests of scientific knowledge.
  • A small percentage of the public believes that vaccines are truly dangerous. People who hold this view — which is incorrect — also believe that they know more than experts about this topic.
  • the study was also conducted in France and Germany, with similar results.
  • those with the least understanding of science had the most science-opposed views, but thought they knew the most
  • the Dunning-Kruger effect, named for the two psychologists who wrote a seminal paper in 1999 entitled “Unskilled and Unaware of It.”
  • Most of them say they are unaffected by claims from experts contradicting the claims of manufacturers. Only a quarter said they would stop using supplements if experts said they were ineffective. They must think they know better.
  • Many Americans take supplements, but the reasons are varied and are not linked to any hard evidence
  • A lack of knowledge leaves some without the contextual information necessary to recognize mistakes, they wrote, and their “incompetence robs them of the ability to realize it.”
  • communication strategies on G.M.O.s — intended to help the public see that their beliefs did not align with experts — wound up backfiring
  • attempting to provide corrective information to voters about death panels wound up increasing their belief in them among politically knowledgeable supporters of Sarah Palin.
  • A 2015 study published in Vaccine showed that giving corrective information about the flu vaccine led patients most concerned about side effects to be less likely to get the vaccine.
  • “knowledge deficit model,” an idea that the lack of support for good policies, and good science, merely reflects a lack of scientific information.
  • Scientists need to be formally trained in communication skills, they said, and they also need to realize that the knowledge deficit model makes for easy policy, but not necessarily good results.
  • It seems important to engage the public more, and earn their trust through continued, more personal interaction, using many different platforms and technologies
  • Bombarding people with more information about studies isn’t helping. How the information contained in them is disseminated and discussed may be much more important.
Javier E

The Science of Snobbery: How We're Duped Into Thinking Fancy Things Are Better - The At... - 0 views

  • Expert judges and amateurs alike claim to judge classical musicians based on sound. But Tsay’s research suggests that the original judges, despite their experience and expertise, judged the competition (which they heard and watched live) based on visual information, just as amateurs do.
  • just like with classical music, we do not appraise wine in the way that we expect. 
  • Priceonomics revisited this seemingly damning research: the lack of correlation between wine enjoyment and price in blind tastings, the oenology students tricked by red food dye into describing a white wine like a red, a distribution of medals at tastings equivalent to what one would expect from pure chance, the grand crus described like cheap wines and vice-versa when the bottles are switched.
  • Taste does not simply equal your taste buds. It draws on information from all our senses as well as context. As a result, food is susceptible to the same trickery as wine. Adding yellow food dye to vanilla pudding leads people to experience a lemony taste. Diners eating in the dark at a chic concept restaurant confuse veal for tuna. Branding, packaging, and price tags are equally important to enjoyment. Cheap fish is routinely passed off as its pricier cousins at seafood and sushi restaurants. 
  • Just like with wine and classical music, we often judge food based on very different criteria than what we claim. The result is that our perceptions are easily skewed in ways we don’t anticipate. 
  • What does it mean for wine that presentation so easily trumps the quality imbued by being grown on premium Napa land or years of fruitful aging? Is it comforting that the same phenomenon is found in food and classical music, or is it a strike against the authenticity of our enjoyment of them as well? How common must these manipulations be until we concede that the influence of the price tag of a bottle of wine or the visual appearance of a pianist is not a trick but actually part of the quality?
  • To answer these questions, we need to investigate the underlying mechanism that leads us to judge wine, food, and music by criteria other than what we claim to value. And that mechanism seems to be the quick, intuitive judgments our minds unconsciously make
  • this unknowability also makes it easy to be led astray when our intuition makes a mistake. We may often be able to count on the price tag or packaging of food and wine for accurate information about quality. But as we believe that we’re judging based on just the product, we fail to recognize when presentation manipulates our snap judgments.
  • Participants were just as effective when watching 6 second video clips and when comparing their ratings to ratings of teacher effectiveness as measured by actual student test performance. 
  • The power of intuitive first impressions has been demonstrated in a variety of other contexts. One experiment found that people predicted the outcome of political elections remarkably well based on silent 10 second video clips of debates - significantly outperforming political pundits and predictions made based on economic indicators.
  • In a real world case, a number of art experts successfully identified a 6th century Greek statue as a fraud. Although the statue had survived a 14 month investigation by a respected museum that included the probings of a geologist, they instantly recognized something was off. They just couldn’t explain how they knew.
  • Cases like this represent the canon behind the idea of the “adaptive unconscious,” a concept made famous by journalist Malcolm Gladwell in his book Blink. The basic idea is that we constantly, quickly, and unconsciously do the equivalent of judging a book by its cover. After all, a cover provides a lot of relevant information in a world in which we don’t have time to read every page.
  • Gladwell describes the adaptive unconscious as “a kind of giant computer that quickly and quietly processes a lot of the data we need in order to keep functioning as human beings.”
  • In a famous experiment, psychologist Nalini Ambady provided participants in an academic study with 30 second silent video clips of a college professor teaching a class and asked them to rate the effectiveness of the professor.
  • In follow up experiments, Chia-Jung Tsay found that those judging musicians’ auditions based on visual cues were not giving preference to attractive performers. Rather, they seemed to look for visual signs of relevant characteristics like passion, creativity, and uniqueness. Seeing signs of passion is valuable information. But in differentiating between elite performers, it gives an edge to someone who looks passionate over someone whose play is passionate
  • Outside of these more eccentric examples, it’s our reliance on quick judgments, and ignorance of their workings, that cause people to act on ugly, unconscious biases
  • It’s also why - from a business perspective - packaging and presentation is just as important as the good or service on offer. Why marketing is just as important as product. 
  • Gladwell ends Blink optimistically. By paying closer attention to our powers of rapid cognition, he argues, we can avoid its pitfalls and harness its powers. We can blindly audition musicians behind a screen, look at a piece of art devoid of other context, and pay particular attention to possible unconscious bias in our performance reports.
  • But Gladwell’s success in demonstrating how the many calculations our adaptive unconscious performs without our awareness undermines his hopeful message of consciously harnessing its power.
  • As a former world-class tennis player and coach of over 50 years, Braden is a perfect example of the ideas behind thin slicing. But if he can’t figure out what his unconscious is up to when he recognizes double faults, why should anyone else expect to be up to the task?
  • Flawed judgment in fields like medicine and investing has more serious consequences. The fact that expertise is so tricky leads psychologist Daniel Kahneman to assert that most experts should seek the assistance of statistics and algorithms in making decisions.
  • In his book Thinking, Fast and Slow, he describes our two modes of thought: System 1, like the adaptive unconscious, is our “fast, instinctive, and emotional” intuition. System 2 is our “slower, more deliberative, and more logical” conscious thought. Kahneman believes that we often leave decisions up to System 1 and generally place far “too much confidence in human judgment” due to the pitfalls of our intuition described above.
  • Not every judgment will be made in a field that is stable and regular enough for an algorithm to help us make judgments or predictions. But in those cases, he notes, “Hundreds of studies have shown that wherever we have sufficient information to build a model, it will perform better than most people.”
  • Experts can avoid the pitfalls of intuition more easily than laypeople. But they need help too, especially as our collective confidence in expertise leads us to overconfidence in their judgments. 
  • This article has referred to the influence of price tags and context on products and experiences like wine and classical music concerts as tricks that skew our perception. But maybe we should consider them a real part of the quality.
  • Losing ourselves in a universe of relativism, however, will lead us to miss out on anything new or unique. Take the example of the song “Hey Ya!” by Outkast. When the music industry heard it, they felt sure it would be a hit. When it premiered on the radio, however, listeners changed the channel. The song sounded too dissimilar from songs people liked, so they responded negatively. 
  • It took time for people to get familiar with the song and realize that they enjoyed it. Eventually “Hey Ya!” became the hit of the summer.
  • Many boorish people talking about the ethereal qualities of great wine probably can't even identify cork taint because their impressions are dominated by the price tag and the wine label. But the classic defense of wine, that you need to study it to appreciate it, is also vindicated. The open question, which is both editorial and empirical, is what it means for the industry that constant vigilance and substantial study are needed to dependably appreciate wine for its quality alone. But the question is relevant to many other products and experiences that we enjoy in life.
  • Maybe the most important conclusion is not only to recognize the fallibility of our judgments and impressions, but to recognize when it matters and when it doesn't.
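Kahneman's point that "wherever we have sufficient information to build a model, it will perform better than most people" can be sketched with a unit-weighted linear model of the kind Robyn Dawes called an "improper" model. The cue names and numbers below are illustrative assumptions, not examples from the book.

```python
# A minimal sketch of the statistical-model idea: score options by averaging
# a few equally weighted, pre-standardized cues instead of judging case by
# case. The candidates and cue values here are hypothetical.

def unit_weight_score(cues):
    """Average of standardized cue values, each weighted equally."""
    return sum(cues.values()) / len(cues)

# Hypothetical candidates with cues already standardized (z-scores).
candidates = {
    "A": {"test_score": 1.2, "experience": -0.3, "interview": 0.5},
    "B": {"test_score": 0.4, "experience": 1.1, "interview": -0.2},
}

ranked = sorted(candidates, key=lambda c: unit_weight_score(candidates[c]),
                reverse=True)
print(ranked)  # → ['A', 'B']
```

The point is not that equal weights are optimal, but that a fixed, transparent rule applied consistently sidesteps the noise and bias of case-by-case intuition in stable, regular domains.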
katedriscoll

Frontiers | A Digital Nudge to Counter Confirmation Bias | Big Data - 1 views

  • Information disorder in current information ecosystems arises not only from the publication of “fake news,” but also from individuals' subjective reading of news and from their propagating news to others. Sometimes the difference between real and fake information is apparent. However, often a message is written to evoke certain emotions and opinions by taking partially true base stories and injecting false statements such that the information looks realistic. In addition, the perception of the trustworthiness of news is often influenced by confirmation bias. As a result, people often believe distorted or outright incorrect news and spread such misinformation further. For example, it was shown that in the months preceding the 2016 American presidential election, organizations from both Russia and Iran ran organized efforts to create such stories and spread them on Twitter and Facebook (Cohen, 2018). It is therefore important to raise internet users' awareness of such practices. Key to this is providing users with means to understand whether information should be trusted or not.
  • In this section, we discuss how social networks increase the spread of biased news and misinformation. We discuss confirmation bias, echo chambers and other factors that may subconsciously influence a person's opinion. We show how these processes can interact to form a vicious circle that favors the rise of untrustworthy sources. Often, when an individual thinks they know something, they are satisfied by an explanation that confirms their belief, without necessarily considering all possible other explanations, and regardless of the veracity of this information. This is confirmation bias in action. Nickerson (1998) defined it as the tendency of people to both seek and interpret evidence that supports an already-held belief.
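Nickerson's definition, the tendency to seek and interpret evidence that supports an already-held belief, can be illustrated with a toy belief-update simulation. The discount factor and evidence stream below are illustrative assumptions, not taken from the paper.

```python
# A toy model of confirmation bias: an agent updates its belief in a claim,
# but weights disconfirming evidence at only a fraction of its face value.
# All parameters here are hypothetical, chosen only to illustrate the effect.

def update(belief, evidence_supports, discount=0.3):
    """Shift belief toward the evidence; evidence that disagrees with the
    current belief is discounted to `discount` of its normal weight."""
    target = 1.0 if evidence_supports else 0.0
    agrees = (evidence_supports and belief >= 0.5) or \
             (not evidence_supports and belief < 0.5)
    weight = 0.1 if agrees else 0.1 * discount
    return belief + weight * (target - belief)

belief = 0.7      # biased agent, starts mildly convinced
unbiased = 0.7    # control agent that weights all evidence fully
for _ in range(4):            # four rounds of purely disconfirming evidence
    belief = update(belief, False)
    unbiased += 0.1 * (0.0 - unbiased)

print(round(belief, 3), round(unbiased, 3))  # → 0.62 0.459
```

Even with uniformly contrary evidence, the biased agent's belief barely moves, which is how repeated exposure inside an echo chamber can leave distorted beliefs intact.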
ilanaprincilus06

How Knowledge Changes Us - The New York Times - 1 views

  • They become the center of every room they enter, with all the attendant narcissism. They also have inside information, and often leap to the conclusion that people who don’t have this information are simply not worth listening to.
    • ilanaprincilus06
       
      This is the logical fallacy of appealing to authority. Their narcissism continues to bias them toward the idea that others will always see them as correct simply because they hold more power.
  • “First, you’ll be exhilarated by some of this new information, and by having it all — so much! incredible! — suddenly available to you.
    • ilanaprincilus06
       
      Relates to memory recollection. Shortly after experiencing something new or exciting, our emotions tend to color it with happiness and other good feelings.
  • ...3 more annotations...
  • Still, there are psychic effects that come from having this information. I have never seen them so perfectly expressed as by Daniel Ellsberg in a speech he supposedly gave to Henry Kissinger in 1968 as Kissinger was entering the government.
    • ilanaprincilus06
       
      One effect could be self-deception. Over time, the secret is likely to be altered by the experiences and actions of the secret holder, distorting its true nature.
  • you will forget there ever was a time when you didn’t have it, and you’ll be aware only of the fact that you have it now and most others don’t
    • ilanaprincilus06
       
      Relates to theory of mind. The secret constantly reminds the secret holder of the disparity between their understanding of the secret and others' ignorance of it.
  • it’s often inaccurate
    • ilanaprincilus06
       
      The effects that the brain leaves on long-term memory. The premise of the secret may still be true, but the conclusion and other evidence are most likely skewed.
ilanaprincilus06

How the web distorts reality and impairs our judgement skills | Media Network | The Gua... - 0 views

  • IBM estimates that 90% of the world's online data has been created just in the past two years. What's more, it has made information more accessible than ever before.
  • However, rather than enhancing knowledge, the internet has produced an information glut or "infoxication".
  • Furthermore, since online content is often curated to fit our preferences, interests and personality, the internet can even enhance our existing biases and undermine our motivation to learn new things.
    • ilanaprincilus06
       
      When we see our preferences constantly being displayed, we are more likely to return to that source or website and use it more often.
  • ...14 more annotations...
  • these filters will isolate people in information bubbles only partly of their own choosing, and the inaccurate beliefs they form as a result may be difficult to correct."
  • the proliferation of search engines, news aggregators and feed-ranking algorithms is more likely to perpetuate ignorance than knowledge.
  • It would seem that excessive social media use may intensify not only feelings of loneliness, but also ideological isolation.
    • ilanaprincilus06
       
      Would social media networks need to stop exploiting these preferences in order for us to limit ideological isolation?
  • "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact."
  • Recent studies show that although most people consume information that matches their opinions, being exposed to conflicting views tends to reduce prejudice and enhance creative thinking.
  • the desire to prove ourselves right and maintain our current beliefs trumps any attempt to be creative or more open-minded.
  • "our objects of inquiry are not 'truth' or 'meaning' but rather configurations of consciousness. These are figures or patterns of knowledge, cognitive and practical attitudes, which emerge within a definite historical and cultural context."
  • the internet is best understood as a cultural lens through which we construct – or distort – reality.
  • we can only deal with this overwhelming range of choices by ignoring most of them.
  • trolling is so effective for enticing readers' comments, but so ineffective for changing their viewpoints.
  • Will accumulating facts help you understand the world?
    • ilanaprincilus06
       
      We must take an extra step past just reading/learning about facts and develop second order thinking about the claims/facts to truly gain a better sense of what is going on.
  • we have developed a dependency on technology, which has eclipsed our reliance on logic, critical thinking and common sense: if you can find the answer online, why bother thinking?
  • it is conceivable that individuals' capacity to evaluate and produce original knowledge will matter more than the actual acquisition of knowledge.
  • Good judgment and decision-making will be more in demand than sheer expertise or domain-specific knowledge.
Javier E

The "missing law" of nature was here all along | Salon.com - 0 views

  • A recently published scientific article proposes a sweeping new law of nature, approaching the matter with a dry, clinical efficiency that still reads like poetry.
  • “Evolving systems are asymmetrical with respect to time; they display temporal increases in diversity, distribution, and/or patterned behavior,” they continue, mounting their case from the shoulders of Charles Darwin, extending it toward all things living and not. 
  • To join the known physics laws of thermodynamics, electromagnetism and Newton’s laws of motion and gravity, the nine scientists and philosophers behind the paper propose their “law of increasing functional information.”
  • ...27 more annotations...
  • In short, a complex and evolving system — whether that's a flock of goldfinches or a nebula or the English language — will produce ever more diverse and intricately detailed states and configurations of itself.
  • Some of these more diverse and intricate configurations, the scientists write, are shed and forgotten over time. The configurations that persist are ones that find some utility or novel function in a process akin to natural selection, but a selection process driven by the passing-on of information rather than just the sowing of biological genes
  • Have they finally glimpsed, I wonder, the connectedness and symbiotic co-evolution of their own scientific ideas with those of the world’s writers
  • Have they learned to describe in their own quantifying language that cradle from which both our disciplines have emerged and the firmament on which they both stand — the hearing and telling of stories in order to exist?
  • Have they quantified the quality of all existent matter, living and not: that all things inherit a story in data to tell, and that our stories are told by the very forms we take to tell them? 
  • “Is there a universal basis for selection? Is there a more quantitative formalism underlying this conjectured conceptual equivalence—a formalism rooted in the transfer of information?,” they ask of the world’s disparate phenomena. “The answer to both questions is yes.”
  • Yes. They’ve glimpsed it, whether they know it or not. Sing to me, O Muse, of functional information and its complex diversity.
  • The principle of complexity evolving at its own pace when left to its own devices, independent of time but certainly in a dance with it, is nothing new. Not in science, nor in its closest humanities kin, science and nature writing. Give things time and nourishing environs, protect them from your own intrusions and — living organisms or not — they will produce abundant enlacement of forms.
  • This is how poetry was born from the same larynxes and phalanges that tendered nuclear equations: We featherless bipeds gave language our time and delighted attendance until its forms were so multivariate that they overflowed with inevitable utility.
  • In her Pulitzer-winning “Pilgrim at Tinker Creek,” nature writer Annie Dillard explains plainly that evolution is the vehicle of such intricacy in the natural world, as much as it is in our own thoughts and actions. 
  • “The stability of simple forms is the sturdy base from which more complex, stable forms might arise, forming in turn more complex forms,” she explains, drawing on the undercap frills of mushrooms and filament-fine filtering tubes inside human kidneys to illustrate her point. 
  • “Utility to the creature is evolution’s only aesthetic consideration. Form follows function in the created world, so far as I know, and the creature that functions, however bizarre, survives to perpetuate its form,” writes Dillard.
  • “Of the multiplicity of forms, I know nothing. Except that, apparently, anything goes. This holds for forms of behavior as well as design — the mantis munching her mate, the frog wintering in mud.” 
  • She notes that, of all forms of life we’ve ever known to exist, only about 10% are still alive. What extravagant multiplicity. 
  • “Intricacy is that which is given from the beginning, the birthright, and in the intricacy is the hardiness of complexity that ensures against the failures of all life,” Dillard writes. “The wonder is — given the errant nature of freedom and the burgeoning texture of time — the wonder is that all the forms are not monsters, that there is beauty at all, grace gratuitous.”
  • “This paper, and the reason why I'm so proud of it, is because it really represents a connection between science and the philosophy of science that perhaps offers a new lens into why we see everything that we see in the universe,” lead scientist Michael Wong told Motherboard in a recent interview. 
  • Wong is an astrobiologist and planetary scientist at the Carnegie Institute for Science. In his team’s paper, that bridge toward scientific philosophy is not only preceded by a long history of literary creativity but directly theorizes about the creative act itself.  
  • “The creation of art and music may seem to have very little to do with the maintenance of society, but their origins may stem from the need to transmit information and create bonds among communities, and to this day, they enrich life in innumerable ways,” Wong’s team writes.  
  • “Perhaps, like eddies swirling off of a primary flow field, selection pressures for ancillary functions can become so distant from the core functions of their host systems that they can effectively be treated as independently evolving systems,” the authors add, pointing toward the elaborate mating dance culture observed in birds of paradise.
  • “Perhaps it will be humanity’s ability to learn, invent, and adopt new collective modes of being that will lead to its long-term persistence as a planetary phenomenon. In light of these considerations, we suspect that the general principles of selection and function discussed here may also apply to the evolution of symbolic and social systems.”
  • The Mekhilta teaches that all Ten Commandments were pronounced in a single utterance. Similarly, the Maharsha says the Torah’s 613 mitzvoth are only perceived as a plurality because we’re time-bound humans, even though they together form a singular truth which is indivisible from He who expressed it. 
  • Or, as the Mishna would have it, “the creations were all made in generic form, and they gradually expanded.” 
  • Like swirling eddies off of a primary flow field.
  • “O Lord, how manifold are thy works!,” cried out David in his psalm. “In wisdom hast thou made them all: the earth is full of thy riches. So is this great and wide sea, wherein are things creeping innumerable, both small and great beasts.” 
  • In all things, then — from poetic inventions, to rare biodiverse ecosystems, to the charted history of our interstellar equations — it is best if we conserve our world’s intellectual and physical diversity, for both the study and testimony of its immeasurable multiplicity.
  • Because, whether wittingly or not, science is singing the tune of the humanities. And whether expressed in algebraic logic or ancient Greek hymn, its chorus is the same throughout the universe: Be fruitful and multiply. 
  • Both intricate configurations of art and matter arise and fade according to their shared characteristic, long-known by students of the humanities: each have been graced with enough time to attend to the necessary affairs of their most enduring pleasures. 