


Javier E

A Curious Midlife Crisis for a Tech Entrepreneur - The New York Times

  • as he approached 40, Fabrice Grinda, a French technology entrepreneur with an estimated net worth of $100 million, couldn’t shake the feeling that something was terribly wrong. Somehow the trappings of his success were weighing him down.
  • “People turn 40 and usually buy a shiny sports car,” Mr. Grinda said during an interview in a penthouse suite at Sixty LES, a downtown boutique hotel. “They don’t say, ‘I’m downsizing my life and giving up all my possessions to focus on experiences and friendships.’ ”
  • He dubbed it “the very big downgrade”: He was going to travel the world, working on the fly while staying with friends and family. He was purposely arranging things so that he would have a chance to focus on what was meaningful in life.
  • But that is exactly what Mr. Grinda did. He moved out of the Bedford house in December 2012, ditched the city apartment and got rid of the McLaren. He donated clothes, sports equipment and kitchen utensils to the Church of St. Francis Xavier in Lower Manhattan. He gave his furniture to Housing Works and he packed a Tumi carry-on suitcase with 50 items, including two pairs of jeans, a bathing suit and 10 pairs of socks.
  • Once he realized his days as a roving houseguest were numbered, Mr. Grinda decided to shift his approach: He kept traveling, but now he was renting apartments on Airbnb or staying in luxury hotels.
  • Born in suburban Paris in 1974, Mr. Grinda graduated from Princeton in 1996 with a degree in economics. He worked as a consultant at McKinsey & Company for two years before moving back to France to found an online auction start-up funded by the business magnate Bernard Arnault, which Mr. Grinda sold in 2000. He returned to the United States, where he co-founded Zingy, a mobile phone ringtone and game maker, which fetched $80 million in a 2004 sale. After that, he was a founder of OLX, a Craigslist-like service that has become one of the largest global classified websites. Now he is an entrepreneur and angel investor, with more than 200 investments to date, who visits start-ups in Berlin, Paris, New York, San Francisco and other cities.
  • He looks (and acts) something like Sheldon Cooper, the oddball science geek played by Jim Parsons on “The Big Bang Theory,” an observation Mr. Grinda himself has made. “Friends, who knew me in my late teens and early twenties, would tell you I had exactly the same delusional sense of self-worth and condescending and arrogant self-centered worldview,” he wrote in a blog post that noted his similarities to the sitcom character.
  • In all, Mr. Grinda said, he stayed with about 15 friends and family members in the first months of 2013. “Everyone was, like, ‘It’s a great idea. Come over,’ ” Mr. Grinda said. “The problem is, the idea of ‘Great, come over’ and me there 24 hours a day, seven days a week, is very different. Especially when their lives are not in sync with mine.”
  • “When I looked back at the things that mattered the most to me,” he said, “they were experiences, friendships and family — none of which I had invested much in, partly because I was too busy, and partly because I felt anchored by my possessions.”
  • He hatched a new plan: His friends and family members would come to him. “Rather than me going to them and disrupting their routine,” he said, “getting everyone together in a setting of vacation makes more sense.”
  • He invited his parents, his friends, their partners, children and nannies for a two-week stay in Anguilla, an island east of Puerto Rico, where he rented two conjoining houses, at a cost of $240,000, with chefs and full house service (and a total of 19 bedrooms).
  • Mr. Grinda forgot to consider that not everyone lives as he does. For one thing, he had scheduled the Anguilla vacation during the school year, which meant friends with children couldn’t make it. The island’s remoteness, furthermore, meant some guests were forced to endure a tangle of flight connections, leaving some of them exhausted by the time they arrived. And many of the people he invited, who had jobs and other obligations, could stay only for a long weekend.
  • Mr. Grinda said he has learned a lot from his very big downgrade. He reconnected with old friends, even if it meant annoying them a little, and he rekindled his relationship with his father. “We spent time talking about his life,” he said. And he is no longer against the idea of having a fixed address; he said he is now in negotiations to buy a two-bedroom apartment on the Lower East Side, which he plans to rent out when he is not in town.
  • Still, the experiment has taken its toll. “The philosophy is interesting,” he said. “But how do you put it into practice? How do you make it real?”
  • He recently split up with Otilia Aionesei, a former model who works at a technology start-up, whom he had been dating, off and on, for two years. The sticking point was their lack of a shared home. “If you want to be his girlfriend, this is the life you have to lead,” Ms. Aionesei said. “I like simple things, to watch movies on the same couch.” Mr. Grinda had a different view. “We went to the Galápagos,” he said. “We went to Tulum. To St. Barts. We have these wonderful experiences and memories together.”
  • “My home is where I am,” he said. “And it doesn’t matter if it is a friend’s place or a couch or the middle of the jungle or a hotel room on the Lower East Side. But I realize that most of humanity, especially women, don’t see it that way.”
Javier E

The Tech Industry's Psychological War on Kids - Member Feature Stories - Medium

  • she cried, “They took my f***ing phone!” Attempting to engage Kelly in conversation, I asked her what she liked about her phone and social media. “They make me happy,” she replied.
  • Even though they were loving and involved parents, Kelly’s mom couldn’t help feeling that they’d failed their daughter and must have done something terribly wrong that led to her problems.
  • My practice as a child and adolescent psychologist is filled with families like Kelly’s. These parents say their kids’ extreme overuse of phones, video games, and social media is the most difficult parenting issue they face — and, in many cases, is tearing the family apart.
  • What none of these parents understand is that their children’s and teens’ destructive obsession with technology is the predictable consequence of a virtually unrecognized merger between the tech industry and psychology.
  • Dr. B.J. Fogg is a psychologist and the father of persuasive technology, a discipline in which digital machines and apps — including smartphones, social media, and video games — are configured to alter human thoughts and behaviors. As his Stanford lab’s website boldly proclaims: “Machines designed to change humans.”
  • These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit.
  • psychology — a discipline that we associate with healing — is now being used as a weapon against children.
  • This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
  • Likewise, social media companies use persuasive design to prey on the age-appropriate desire of preteen and teen kids, especially girls, to be socially successful. This drive is built into our DNA, since real-world relational skills have fostered human evolution.
  • Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”
  • Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfill their basic human drives — to be social or obtain goals — better than real-world alternatives.
  • Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.
  • While persuasion techniques work well on adults, they are particularly effective at influencing the still-maturing child and teen brain.
  • “Video games, better than anything else in our culture, deliver rewards to people, especially teenage boys,” says Fogg. “Teenage boys are wired to seek competency. To master our world and get better at stuff. Video games, in dishing out rewards, can convey to people that their competency is growing, you can get better at something second by second.”
  • it’s persuasive design that’s helped convince this generation of boys they are gaining “competency” by spending countless hours on game sites, when the sad reality is they are locked away in their rooms gaming, ignoring school, and not developing the real-world competencies that colleges and employers demand.
  • Persuasive technologies work because of their apparent triggering of the release of dopamine, a powerful neurotransmitter involved in reward, attention, and addiction.
  • As she says, “If you don’t get 100 ‘likes,’ you make other people share it so you get 100…. Or else you just get upset. Everyone wants to get the most ‘likes.’ It’s like a popularity contest.”
  • there are costs to Casey’s phone obsession, noting that the “girl’s phone, be it Facebook, Instagram or iMessage, is constantly pulling her away from her homework, sleep, or conversations with her family.”
  • Casey says she wishes she could put her phone down. But she can’t. “I’ll wake up in the morning and go on Facebook just… because,” she says. “It’s not like I want to or I don’t. I just go on it. I’m, like, forced to. I don’t know why. I need to. Facebook takes up my whole life.”
  • B.J. Fogg may not be a household name, but Fortune Magazine calls him a “New Guru You Should Know,” and his research is driving a worldwide legion of user experience (UX) designers who utilize and expand upon his models of persuasive design.
  • “No one has perhaps been as influential on the current generation of user experience (UX) designers as Stanford researcher B.J. Fogg.”
  • the core of UX research is about using psychology to take advantage of our human vulnerabilities.
  • As Fogg is quoted in Kosner’s Forbes article, “Facebook, Twitter, Google, you name it, these companies have been using computers to influence our behavior.” However, the driving force behind behavior change isn’t computers. “The missing link isn’t the technology, it’s psychology,” says Fogg.
  • UX researchers not only follow Fogg’s design model, but also his apparent tendency to overlook the broader implications of persuasive design. They focus on the task at hand, building digital machines and apps that better demand users’ attention, compel users to return again and again, and grow businesses’ bottom line.
  • the “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers.
  • “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.”
  • Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.” Hence, social networks are designed for ease of use
  • Finally, Fogg says that potential users need to be triggered to use a site. This is accomplished by a myriad of digital tricks, including the sending of incessant notifications
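The three factors above form Fogg's model: a behavior occurs only when motivation, ability, and a trigger converge at the same moment. A minimal sketch of that idea in Python, with the caveat that the numeric scales, the multiplicative trade-off, and the threshold are illustrative assumptions rather than part of Fogg's published formulation:

```python
# Toy sketch of the Fogg Behavior Model: behavior happens when a trigger
# arrives while motivation and ability together cross an action threshold.
# Scales (0.0-1.0) and the 0.5 threshold are invented for illustration.

def behavior_occurs(motivation: float, ability: float,
                    trigger_present: bool, threshold: float = 0.5) -> bool:
    """Return True if the modeled user acts.

    A trigger is necessary but not sufficient; motivation and ability
    trade off against each other, which is why designers who cannot
    raise motivation instead make products require no hard thinking
    (raising ability) and send incessant notifications (triggers).
    """
    if not trigger_present:
        return False
    return motivation * ability >= threshold

# Easy action, moderate motivation: the user acts when triggered.
print(behavior_occurs(motivation=0.7, ability=0.9, trigger_present=True))   # True
# Harder action: the same motivation no longer clears the threshold.
print(behavior_occurs(motivation=0.7, ability=0.4, trigger_present=True))   # False
# Without a trigger, nothing happens, regardless of motivation.
print(behavior_occurs(motivation=1.0, ability=1.0, trigger_present=False))  # False
```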
  • moral questions about the impact of turning persuasive techniques on children and teens are not being asked. For example, should the fear of social rejection be used to compel kids to compulsively use social media? Is it okay to lure kids away from school tasks that demand a strong mental effort so they can spend their lives on social networks or playing video games that don’t make them think much at all?
  • Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.”
  • the startup Dopamine Labs boasts about its use of persuasive techniques to increase profits: “Connect your app to our Persuasive AI [Artificial Intelligence] and lift your engagement and revenue up to 30% by giving your users our perfect bursts of dopamine,” and “A burst of Dopamine doesn’t just feel good: it’s proven to re-wire user behavior and habits.”
  • Ramsay Brown, the founder of Dopamine Labs, says in a KQED Science article, “We have now developed a rigorous technology of the human mind, and that is both exciting and terrifying. We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behavior in ways that, unbeknownst to them, feel second-nature but are really by design.”
  • Programmers call this “brain hacking,” as it compels users to spend more time on sites even though they mistakenly believe it’s strictly due to their own conscious choices.
  • Banks of computers employ AI to “learn” which of a countless number of persuasive design elements will keep users hooked
  • A persuasion profile of a particular user’s unique vulnerabilities is developed in real time and exploited to keep users on the site and make them return again and again for longer periods of time. This drives up profits for consumer internet companies whose revenue is based on how much their products are used.
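The learning loop described above can be sketched as a simple multi-armed bandit. This is an illustrative toy, not any company's actual system: the notification variants and their hidden click rates are invented, and real deployments use far richer per-user models, but the core mechanic of automatically converging on whichever design element hooks users best is the same.

```python
import random

# Epsilon-greedy bandit that "learns" which notification variant keeps a
# simulated user engaged. All names and rates below are invented.
random.seed(0)

VARIANTS = ["badge_count", "friend_tagged_you", "streak_reminder"]
TRUE_CLICK_RATE = {"badge_count": 0.10, "friend_tagged_you": 0.35,
                   "streak_reminder": 0.20}  # hidden from the learner

counts = {v: 0 for v in VARIANTS}    # times each variant was shown
rewards = {v: 0.0 for v in VARIANTS}  # clicks each variant earned

def estimated_rate(v: str) -> float:
    return rewards[v] / counts[v] if counts[v] else 0.0

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-known variant; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(VARIANTS)
    return max(VARIANTS, key=estimated_rate)

for _ in range(5000):
    v = choose()
    clicked = random.random() < TRUE_CLICK_RATE[v]  # simulated user response
    counts[v] += 1
    rewards[v] += clicked

best = max(VARIANTS, key=estimated_rate)
print(best)  # the loop typically converges on the highest-engagement variant
```

Scaled up to hundreds of design elements and millions of users, this kind of loop is what lets a site drive up time-on-site without any human deciding, case by case, what each user sees.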
  • “The leaders of Internet companies face an interesting, if also morally questionable, imperative: either they hijack neuroscience to gain market share and make large profits, or they let competitors do that and run away with the market.”
  • Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival.
  • Children’s well-being is not part of the decision calculus.
  • one breakthrough occurred in 2017 when Facebook documents were leaked to The Australian. The internal report crafted by Facebook executives showed the social network boasting to advertisers that by monitoring posts, interactions, and photos in real time, the network is able to track when teens feel “insecure,” “worthless,” “stressed,” “useless” and a “failure.”
  • The report also bragged about Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.”
  • These design techniques provide tech corporations a window into kids’ hearts and minds to measure their particular vulnerabilities, which can then be used to control their behavior as consumers. This isn’t some strange future… this is now.
  • The official tech industry line is that persuasive technologies are used to make products more engaging and enjoyable. But the revelations of industry insiders can reveal darker motives.
  • Revealing the hard science behind persuasive technology, Hopson says, “This is not to say that players are the same as rats, but that there are general rules of learning which apply equally to both.”
  • After penning the paper, Hopson was hired by Microsoft, where he helped lead the development of Xbox Live, Microsoft’s online gaming system
  • “If game designers are going to pull a person away from every other voluntary social activity or hobby or pastime, they’re going to have to engage that person at a very deep level in every possible way they can.”
  • This is the dominant effect of persuasive design today: building video games and social media products so compelling that they pull users away from the real world to spend their lives in for-profit domains.
  • Persuasive technologies are reshaping childhood, luring kids away from family and schoolwork to spend more and more of their lives sitting before screens and phones.
  • “Since we’ve figured to some extent how these pieces of the brain that handle addiction are working, people have figured out how to juice them further and how to bake that information into apps.”
  • Today, persuasive design is likely distracting adults from driving safely, productive work, and engaging with their own children — all matters which need urgent attention
  • Still, because the child and adolescent brain is more easily controlled than the adult mind, the use of persuasive design is having a much more hurtful impact on kids.
  • But to engage in a pursuit at the expense of important real-world activities is a core element of addiction.
  • younger U.S. children now spend 5 ½ hours each day with entertainment technologies, including video games, social media, and online videos.
  • Even more, the average teen now spends an incredible 8 hours each day playing with screens and phones
  • U.S. kids only spend 16 minutes each day using the computer at home for school.
  • Quietly, using screens and phones for entertainment has become the dominant activity of childhood.
  • Younger kids spend more time engaging with entertainment screens than they do in school
  • teens spend even more time playing with screens and phones than they do sleeping
  • kids are so taken with their phones and other devices that they have turned their backs to the world around them.
  • many children are missing out on real-life engagement with family and school — the two cornerstones of childhood that lead them to grow up happy and successful
  • persuasive technologies are pulling kids into often toxic digital environments
  • A too frequent experience for many is being cyberbullied, which increases their risk of skipping school and considering suicide.
  • And there is growing recognition of the negative impact of FOMO, or the fear of missing out, as kids spend their social media lives watching a parade of peers who look to be having a great time without them, feeding their feelings of loneliness and being less than.
  • The combined effects of the displacement of vital childhood activities and exposure to unhealthy online environments is wrecking a generation.
  • as the typical age when kids get their first smartphone has fallen to 10, it’s no surprise to see serious psychiatric problems — once the domain of teens — now enveloping young kids
  • Self-inflicted injuries, such as cutting, that are serious enough to require treatment in an emergency room, have increased dramatically in 10- to 14-year-old girls, up 19% per year since 2009.
  • While girls are pulled onto smartphones and social media, boys are more likely to be seduced into the world of video gaming, often at the expense of a focus on school
  • it’s no surprise to see this generation of boys struggling to make it to college: a full 57% of college admissions are granted to young women compared with only 43% to young men.
  • Economists working with the National Bureau of Economic Research recently demonstrated how many young U.S. men are choosing to play video games rather than join the workforce.
  • The destructive forces of psychology deployed by the tech industry are making a greater impact on kids than the positive uses of psychology by mental health providers and child advocates. Put plainly, the science of psychology is hurting kids more than helping them.
  • Hope for this wired generation has seemed dim until recently, when a surprising group has come forward to criticize the tech industry’s use of psychological manipulation: tech executives
  • Tristan Harris, formerly a design ethicist at Google, has led the way by unmasking the industry’s use of persuasive design. Interviewed in The Economist’s 1843 magazine, he says, “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”
  • Marc Benioff, CEO of the cloud computing company Salesforce, is one of the voices calling for the regulation of social media companies because of their potential to addict children. He says that just as the cigarette industry has been regulated, so too should social media companies. “I think that, for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive, and we need to rein that back as much as possible,”
  • “If there’s an unfair advantage or things that are out there that are not understood by parents, then the government’s got to come forward and illuminate that.”
  • Since millions of parents, for example the parents of my patient Kelly, have absolutely no idea that devices are used to hijack their children’s minds and lives, regulation of such practices is the right thing to do.
  • Another improbable group to speak out on behalf of children is tech investors.
  • How has the consumer tech industry responded to these calls for change? By going even lower.
  • Facebook recently launched Messenger Kids, a social media app that will reach kids as young as five years old. Suggestive that harmful persuasive design is now homing in on very young children is the declaration of Messenger Kids Art Director, Shiu Pei Luu, “We want to help foster communication [on Facebook] and make that the most exciting thing you want to be doing.”
  • the American Psychological Association (APA) — which is tasked with protecting children and families from harmful psychological practices — has been essentially silent on the matter
  • APA Ethical Standards require the profession to make efforts to correct the “misuse” of the work of psychologists, which would include the application of B.J. Fogg’s persuasive technologies to influence children against their best interests
  • Manipulating children for profit without their own or parents’ consent, and driving kids to spend more time on devices that contribute to emotional and academic problems is the embodiment of unethical psychological practice.
  • “Never before in history have basically 50 mostly men, mostly 20–35, mostly white engineer designer types within 50 miles of where we are right now [Silicon Valley], had control of what a billion people think and do.”
  • Some may argue that it’s the parents’ responsibility to protect their children from tech industry deception. However, parents have no idea of the powerful forces aligned against them, nor do they know how technologies are developed with drug-like effects to capture kids’ minds
  • Others will claim that nothing should be done because the intention behind persuasive design is to build better products, not manipulate kids
  • similar circumstances exist in the cigarette industry, as tobacco companies have as their intention profiting from the sale of their product, not hurting children. Nonetheless, because cigarettes and persuasive design predictably harm children, actions should be taken to protect kids from their effects.
  • in a 1998 academic paper, Fogg describes what should happen if things go wrong, saying, if persuasive technologies are “deemed harmful or questionable in some regard, a researcher should then either take social action or advocate that others do so.”
  • I suggest turning to President John F. Kennedy’s prescient guidance: He said that technology “has no conscience of its own. Whether it will become a force for good or ill depends on man.”
  • The APA should begin by demanding that the tech industry’s behavioral manipulation techniques be brought out of the shadows and exposed to the light of public awareness
  • Changes should be made in the APA’s Ethics Code to specifically prevent psychologists from manipulating children using digital machines, especially if such influence is known to pose risks to their well-being.
  • Moreover, the APA should follow its Ethical Standards by making strong efforts to correct the misuse of psychological persuasion by the tech industry and by user experience designers outside the field of psychology.
  • It should join with tech executives who are demanding that persuasive design in kids’ tech products be regulated
  • The APA also should make its powerful voice heard amongst the growing chorus calling out tech companies that intentionally exploit children’s vulnerabilities.
Javier E

They're Watching You at Work - Don Peck - The Atlantic

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • The prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team.
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
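The red/yellow/green rating that Xerox's evaluation produces can be sketched as a simple threshold function over a model score. This is an illustrative sketch only: the 0-to-1 score and the cutoffs are invented, and the actual model is proprietary.

```python
def rate_candidate(score: float) -> str:
    """Map an algorithmic assessment score in [0, 1] to a
    Xerox-style color rating. Thresholds here are hypothetical."""
    if score >= 0.7:
        return "green"   # hire away
    if score >= 0.4:
        return "yellow"  # middling
    return "red"         # poor candidate

print(rate_candidate(0.85))  # green
print(rate_candidate(0.50))  # yellow
print(rate_candidate(0.10))  # red
```

The point of the color scheme is interface design rather than statistics: it compresses a continuous prediction into a decision a hiring manager can act on at a glance.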
  • The idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before.
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building).
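The "split those and slice and dice by industry and location" idea amounts to grouping performance-outcome data by an attribute and summarizing each group. A minimal stdlib sketch, with invented toy records standing in for Evolv's data:

```python
from collections import defaultdict

# Toy outcome records; fields and values are invented for illustration.
records = [
    {"industry": "retail", "location": "US", "tenure_days": 400},
    {"industry": "retail", "location": "US", "tenure_days": 90},
    {"industry": "call_center", "location": "UK", "tenure_days": 250},
    {"industry": "call_center", "location": "US", "tenure_days": 30},
]

def average_tenure_by(rows, key):
    """Group rows by the given attribute and average tenure per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row["tenure_days"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(average_tenure_by(records, "industry"))
# {'retail': 245.0, 'call_center': 140.0}
```

With hundreds of thousands of employees in the data set, the same grouping can be pushed down to much finer segments before the per-group samples become too small to trust.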
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people.
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore.”
  • What most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • He tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
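Pentland's finding that face-to-face exchange counts predict team performance, with too many exchanges as harmful as too few, describes an inverted-U relationship. A toy model under stated assumptions (the optimum and the scale are invented; his actual models are not public):

```python
def predicted_performance(exchanges_per_day: float, optimum: float = 30.0) -> float:
    """Toy inverted-U: predicted performance peaks at an assumed optimal
    number of face-to-face exchanges and falls off on either side."""
    return max(0.0, 1.0 - ((exchanges_per_day - optimum) / optimum) ** 2)

print(predicted_performance(30.0))  # 1.0 at the assumed optimum
print(predicted_performance(5.0))   # low: too few exchanges
print(predicted_performance(60.0))  # 0.0: too many exchanges
```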
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • People analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • The most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • Having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
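In spirit, Gild's approach combines normalized signals from several sources (code quality, adoption by other programmers, forum reputation, language use) into a single programmer score. The feature names and weights below are hypothetical; the real algorithms are proprietary and far more involved.

```python
# Hypothetical weights over normalized [0, 1] signals.
WEIGHTS = {
    "code_quality": 0.40,      # simplicity, elegance, documentation
    "code_adoption": 0.20,     # how often other programmers reuse the code
    "forum_reputation": 0.25,  # e.g., popularity of Stack Overflow answers
    "language_signals": 0.15,  # phrasing correlated with expert coders
}

def programmer_score(features: dict) -> float:
    """Weighted sum of whatever signals are available for a candidate;
    missing signals simply contribute zero."""
    return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

candidate = {"code_quality": 0.9, "code_adoption": 0.5,
             "forum_reputation": 0.8, "language_signals": 0.6}
print(round(programmer_score(candidate), 2))  # 0.75
```

Treating missing signals as zero is what lets such a system score people who have never published open-source code, as the annotation describes: the remaining signals still produce a (noisier) estimate.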
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • Professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job.
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers.
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • The limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • Because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history.
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • All of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers.
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • He has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • The “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • Even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours.
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores.
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it.
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • As the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning.
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists.
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information.
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
sanderk
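The pay-per-click economics described in the highlights above can be sketched with a toy expected-revenue ranking: the platform earns only when an ad is clicked, so it ranks ads by bid times predicted click-through rate, and even small improvements in those predictions translate directly into income. This is a minimal illustration under invented numbers, not Google's actual (proprietary) auction:

```python
# Toy sketch of pay-per-click ad ranking (assumption: a simple
# expected-revenue sort, not any real ad platform's auction).
# Each advertiser bids a price per click; the platform estimates
# the probability a user clicks (pCTR) and ranks by bid * pCTR,
# the expected revenue per impression.

def rank_ads(ads):
    """Rank ads by expected revenue per impression (bid * predicted CTR)."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["pctr"], reverse=True)

ads = [
    {"name": "A", "bid": 2.00, "pctr": 0.010},  # expected revenue 0.020
    {"name": "B", "bid": 0.50, "pctr": 0.050},  # expected revenue 0.025
    {"name": "C", "bid": 1.00, "pctr": 0.015},  # expected revenue 0.015
]

ranked = rank_ads(ads)
print([ad["name"] for ad in ranked])  # → ['B', 'A', 'C']
```

Note that the lowest bidder wins the top slot because its predicted click rate is highest — which is exactly why behavioral data that sharpens those predictions is so valuable to the platform.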

The Exaggerated Promise of So-Called Unbiased Data Mining | WIRED - 1 views

  • The Feynman trap—ransacking data for patterns without any preconceived idea of what one is looking for—is the Achilles heel of studies based on data mining. Finding something unusual or surprising after it has already occurred is neither unusual nor surprising. Patterns are sure to be found, and are likely to be misleading, absurd, or worse.
  • A standard neuroscience experiment involves showing a volunteer in an MRI machine various images and asking questions about the images. The measurements are noisy, picking up magnetic signals from the environment and from variations in the density of fatty tissue in different parts of the brain. Sometimes they miss brain activity; sometimes they suggest activity where there is none. A Dartmouth graduate student used an MRI machine to study the brain activity of a salmon as it was shown photographs and asked questions. The most interesting thing about the study was not that a salmon was studied, but that the salmon was dead. Yep, a dead salmon purchased at a local market was put into the MRI machine, and some patterns were discovered. There were inevitably patterns—and they were invariably meaningless.
  •  
    This article relates to our discussion in class about data mining. Some scientists assume that patterns found in data are meaningful, instead of first making a hypothesis and then testing whether it holds. These assumptions can lead to false conclusions. The article also talks about how people go through all of this data without knowing what they are looking for; ransacking data this way is called the Feynman trap. I also found it interesting that someone studied the brain activity of a dead fish and still found patterns.
kushnerha
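The dead-salmon result above is a textbook multiple-comparisons problem: test enough variables against pure noise and some will clear a "significance" threshold by chance alone. A few lines of Python, using only made-up random numbers with no real effect anywhere, show how mining noise guarantees "discoveries":

```python
# Toy demonstration of the "Feynman trap": search pure noise for
# patterns and patterns reliably appear. Both "groups" below are
# random numbers; no variable has any true group difference.
import random

random.seed(0)
n_per_group, n_variables = 20, 200

def mean(xs):
    return sum(xs) / len(xs)

discoveries = []
for v in range(n_variables):
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    # Standard error of the difference of two means is
    # sqrt(1/20 + 1/20) ≈ 0.316; flag anything beyond ~2 standard
    # errors, roughly the conventional p < 0.05 cutoff.
    if abs(mean(a) - mean(b)) > 2 * (0.1 ** 0.5):
        discoveries.append(v)

# With zero real effects, roughly 5% of the 200 variables (about 10)
# still get flagged as "significant" patterns.
print(len(discoveries))
```

The fix the article implies is the usual one: state the hypothesis before looking at the data, or correct the threshold for the number of comparisons made.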

Facebook's Bias Is Built-In, and Bears Watching - The New York Times - 2 views

  • Facebook is the world’s most influential source of news. That’s true according to every available measure of size — the billion-plus people who devour its News Feed every day, the cargo ships of profit it keeps raking in, and the tsunami of online traffic it sends to other news sites.
  • But Facebook has also acquired a more subtle power to shape the wider news business. Across the industry, reporters, editors and media executives now look to Facebook the same way nesting baby chicks look to their engorged mother — as the source of all knowledge and nourishment, the model for how to behave in this scary new-media world. Case in point: The New York Times, among others, recently began an initiative to broadcast live video. Why do you suppose that might be? Yup, the F word. The deal includes payments from Facebook to news outlets, including The Times.
  • Yet few Americans think of Facebook as a powerful media organization, one that can alter events in the real world. When blowhards rant about the mainstream media, they do not usually mean Facebook, the mainstreamiest of all social networks. That’s because Facebook operates under a veneer of empiricism. Many people believe that what you see on Facebook represents some kind of data-mined objective truth unmolested by the subjective attitudes of fair-and-balanced human beings.
  • ...11 more annotations...
  • None of that is true. This week, Facebook rushed to deny a report in Gizmodo that said the team in charge of its “trending” news list routinely suppressed conservative points of view. Last month, Gizmodo also reported that Facebook employees asked Mark Zuckerberg, the social network’s chief executive, if the company had a responsibility to “help prevent President Trump in 2017.” Facebook denied it would ever try to manipulate elections.
  • Even if you believe that Facebook isn’t monkeying with the trending list or actively trying to swing the vote, the reports serve as timely reminders of the ever-increasing potential dangers of Facebook’s hold on the news.
  • The question isn’t whether Facebook has outsize power to shape the world — of course it does, and of course you should worry about that power. If it wanted to, Facebook could try to sway elections, favor certain policies, or just make you feel a certain way about the world, as it once proved it could do in an experiment devised to measure how emotions spread online.
  • There is no evidence Facebook is doing anything so alarming now. The danger is nevertheless real. The biggest worry is that Facebook doesn’t seem to recognize its own power, and doesn’t think of itself as a news organization with a well-developed sense of institutional ethics and responsibility, or even a potential for bias. Neither does its audience, which might believe that Facebook is immune to bias because it is run by computers.
  • That myth should die. It’s true that beyond the Trending box, most of the stories Facebook presents to you are selected by its algorithms, but those algorithms are as infused with bias as any other human editorial decision.
  • “With Facebook, humans are never not involved. Humans are in every step of the process — in terms of what we’re clicking on, who’s shifting the algorithms behind the scenes, what kind of user testing is being done, and the initial training data provided by humans.”Everything you see on Facebook is therefore the product of these people’s expertise and considered judgment, as well as their conscious and unconscious biases apart from possible malfeasance or potential corruption. It’s often hard to know which, because Facebook’s editorial sensibilities are secret. So are its personalities: Most of the engineers, designers and others who decide what people see on Facebook will remain forever unknown to its audience.
  • Facebook also has an unmistakable corporate ethos and point of view. The company is staffed mostly by wealthy coastal Americans who tend to support Democrats, and it is wholly controlled by a young billionaire who has expressed policy preferences that many people find objectionable.
  • You could argue that none of this is unusual. Many large media outlets are powerful, somewhat opaque, operated for profit, and controlled by wealthy people who aren’t shy about their policy agendas — Bloomberg News, The Washington Post, Fox News and The New York Times, to name a few.But there are some reasons to be even more wary of Facebook’s bias. One is institutional. Many mainstream outlets have a rigorous set of rules and norms about what’s acceptable and what’s not in the news business.
  • Those algorithms could have profound implications for society. For instance, one persistent worry about algorithmic-selected news is that it might reinforce people’s previously held points of view. If News Feed shows news that we’re each likely to Like, it could trap us into echo chambers and contribute to rising political polarization. In a study last year, Facebook’s scientists asserted the echo chamber effect was muted.
  • are Facebook’s engineering decisions subject to ethical review? Nobody knows.
  • The other reason to be wary of Facebook’s bias has to do with sheer size. Ms. Caplan notes that when studying bias in traditional media, scholars try to make comparisons across different news outlets. To determine if The Times is ignoring a certain story unfairly, look at competitors like The Washington Post and The Wall Street Journal. If those outlets are covering a story and The Times isn’t, there could be something amiss about the Times’s news judgment.Such comparative studies are nearly impossible for Facebook. Facebook is personalized, in that what you see on your News Feed is different from what I see on mine, so the only entity in a position to look for systemic bias across all of Facebook is Facebook itself. Even if you could determine the spread of stories across all of Facebook’s readers, what would you compare it to?
Javier E

Technology's Man Problem - NYTimes.com - 0 views

  • computer engineering, the most innovative sector of the economy, remains behind. Many women who want to be engineers encounter a field where they not only are significantly underrepresented but also feel pushed away.
  • Among the women who join the field, 56 percent leave by midcareer, a startling attrition rate that is double that for men, according to research from the Harvard Business School.
  • A culprit, many people in the field say, is a sexist, alpha-male culture that can make women and other people who don’t fit the mold feel unwelcome, demeaned or even endangered.
  • ...12 more annotations...
  • “I’ve been a programmer for 13 years, and I’ve always been one of the only women and queer people in the room. I’ve been harassed, I’ve had people make suggestive comments to me, I’ve had people basically dismiss my expertise. I’ve gotten rape and death threats just for speaking out about this stuff.”
  • “We see these stories, ‘Why aren’t there more women in computer science and engineering?’ and there’s all these complicated answers like, ‘School advisers don’t have them take math and physics,’ and it’s probably true,” said Lauren Weinstein, a man who has spent his four-decade career in tech working mostly with other men, and is currently a consultant for Google.“But I think there’s probably a simpler reason,” he said, “which is these guys are just jerks, and women know it.”
  • once programming gained prestige, women were pushed out. Over the decades, the share of women in computing has continued to decline. In 2012, just 18 percent of computer-science college graduates were women, down from 37 percent in 1985, according to the National Center for Women & Information Technology.
  • Some 1.2 million computing jobs will be available in 2022, yet United States universities are producing only 39 percent of the graduates needed to fill them, the N.C.W.I.T. estimates.
  • an engineer at Pinterest has collected data from people at 133 start-ups and found that an average of 12 percent of the engineers are women.
  • Twenty percent of software developers are women, according to the Labor Department, and fewer than 6 percent of engineers are black or Hispanic. Comparatively, 56 percent of people in business and financial-operations jobs are women, as are 36 percent of physicians and surgeons and one-third of lawyers.
  • “It makes a hostile environment for me,” she said. “But I don’t want to raise my hand and call negative attention toward myself, and become the woman who is the problem — ‘that woman.’ In start-up culture they protect their own tribe, so by putting my hand up, I’m saying I’m an ‘other,’ I shouldn’t be there, so for me that’s an economic threat.”
  • “Many women have come to me and said they basically have had to hide on the Net now,” said Mr. Weinstein, who works on issues of identity and anonymity online. “They use male names, they don’t put their real photos up, because they are immediately targeted and harassed.”
  • “It’s a boys’ club, and you have to try to get into it, and they’re trying as hard as they can to prove you can’t,” said Ephrat Bitton, the director of algorithms at FutureAdvisor, an online investment start-up that she says has a better culture because almost half the engineers are women.
  • Writing code is a high-pressure job with little room for error, as are many jobs. But coding can be stressful in a different way, women interviewed for this article said, because code reviews — peer reviews to spot mistakes in software — can quickly devolve.
  • “Code reviews are brutal — ‘Mine is better than yours, I see flaws in yours’ — and they should be, for the creation of good software,” said Ellen Ullman, a software engineer and author. “I think when you add a drop of women into it, it just exacerbates the problem, because here’s a kind of foreigner.”
  • But some women argue that these kinds of initiatives are unhelpful.“My general issue with the coverage of women in tech is that women in the technology press are talked about in the context of being women, and men are talked about in the context of being in technology,” said a technical woman who would speak only on condition of anonymity because she did not want to be part of an article about women in tech.
Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • ...52 more annotations...
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It can not, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
oliviaodon
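The "coefficient" relationship-strength metric quoted in the highlights above can be sketched as a weighted sum of interaction counts between two users. The real signals and weights are Facebook's secret; the interaction types and numbers below are illustrative assumptions only, with messaging weighted highest because the article calls it the strongest signal:

```python
# Hedged sketch of a relationship-strength score like Facebook's
# "coefficient". All weights and interaction categories here are
# invented for illustration; the real system is proprietary.

INTERACTION_WEIGHTS = {
    "message": 5.0,       # article: messaging is the strongest signal
    "comment": 3.0,
    "profile_view": 2.0,
    "like": 1.0,
}

def coefficient(interactions):
    """Sum weighted interaction counts into one closeness score."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())

# Someone you message often vs. someone whose posts you merely like:
close_friend = coefficient({"message": 10, "like": 3})       # 53.0
acquaintance = coefficient({"like": 5, "profile_view": 1})   # 7.0
print(close_friend > acquaintance)  # → True
```

A score like this is what lets the feed ranker prefer "meaningful" interactions: prioritizing high-coefficient content is simultaneously what users may find valuable and what generates the richest data for the platform.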

Climate Science Meets a Stubborn Obstacle: Students - The New York Times - 0 views

  • WELLSTON, Ohio — To Gwen Beatty, a junior at the high school in this proud, struggling, Trump-supporting town, the new science teacher’s lessons on climate change seemed explicitly designed to provoke her. So she provoked him back. When the teacher, James Sutter, ascribed the recent warming of the Earth to heat-trapping gases released by burning fossil fuels like the coal her father had once mined, she asserted that it could be a result of other, natural causes. When he described the flooding, droughts and fierce storms that scientists predict within the century if such carbon emissions are not sharply reduced, she challenged him to prove it. “Scientists are wrong all the time,” she said with a shrug, echoing those celebrating President Trump’s announcement last week that the United States would withdraw from the Paris climate accord.
  • She was, he knew, a straight-A student. She would have had no trouble comprehending the evidence, embedded in ancient tree rings, ice, leaves and shells, as well as sophisticated computer models, that atmospheric carbon dioxide is the chief culprit when it comes to warming the world.
  • When she insisted that teachers “are supposed to be open to opinions,” however, Mr. Sutter held his ground.“It’s not about opinions,” he told her. “It’s about the evidence.”
  • ...4 more annotations...
  • As more of the nation’s teachers seek to integrate climate science into the curriculum, many of them are reckoning with students for whom suspicion of the subject is deeply rooted.
  • rejecting the key findings of climate science can seem like a matter of loyalty to a way of life already under siege.
  • Originally tied, perhaps, to economic self-interest, climate skepticism has itself become a proxy for conservative ideals of hard work, small government and what people here call “self-sustainability.”
  • “What people ‘believe’ about global warming doesn’t reflect what they know,” Dan Kahan, a Yale researcher who studies political polarization, has stressed in talks, papers and blog posts. “It expresses who they are.”
  •  
    I thought this article was very interesting as it showed students' increasing suspicion of climate change. Something I found remarkable is that one student said that teachers should be open to opinions, but the environmental science teacher said, "It's not about opinions, it's about the evidence." The article also touched on the way economic self-interest led to a town's climate skepticism.
Javier E

The Scoreboards Where You Can't See Your Score - NYTimes.com - 0 views

  • The characters in Gary Shteyngart’s novel “Super Sad True Love Story” inhabit a continuously surveilled and scored society.
  • Consider the protagonist, Lenny Abramov, age 39. A digital dossier about him accumulates his every health condition (high cholesterol, depression), liability (mortgage: $560,330), purchase (“bound, printed, nonstreaming media artifact”), tendency (“heterosexual, nonathletic, nonautomotive, nonreligious”) and probability (“life span estimated at 83”). And that profile is available for perusal by employers, friends and even strangers in bars.
  • Even before the appearance of these books, a report called “The Scoring of America” by the World Privacy Forum showed how analytics companies now offer categorization services like “churn scores,” which aim to predict which customers are likely to forsake their mobile phone carrier or cable TV provider for another company; “job security scores,” which factor a person’s risk of unemployment into calculations of his or her ability to pay back a loan; “charitable donor scores,” which foundations use to identify the households likeliest to make large donations; and “frailty scores,” which are typically used to predict the risk of medical complications and death in elderly patients who have surgery.
  • ...12 more annotations...
  • In two nonfiction books, scheduled to be published in January, technology experts examine similar consumer-ranking techniques already in widespread use.
  • While a federal law called the Fair Credit Reporting Act requires consumer reporting agencies to provide individuals with copies of their credit reports on request, many other companies are free to keep their proprietary consumer scores to themselves.
  • Befitting the founder of a firm that markets reputation management, Mr. Fertik contends that individuals have some power to influence commercial scoring systems.
  • “This will happen whether or not you want to participate, and these scores will be used by others to make major decisions about your life, such as whether to hire, insure, or even date you,”
  • “Important corporate actors have unprecedented knowledge of the minutiae of our daily lives,” he writes in “The Black Box Society: The Secret Algorithms That Control Money and Information” (Harvard University Press), “while we know little to nothing about how they use this knowledge to influence important decisions that we — and they — make.”
  • Data brokers amass dossiers with thousands of details about individual consumers, like age, religion, ethnicity, profession, mortgage size, social networks, estimated income and health concerns such as impotence and irritable bowel syndrome. Then analytics engines can compare patterns in those variables against computer forecasting models. Algorithms are used to assign consumers scores — and to recommend offering, or withholding, particular products, services or fees — based on predictions about their behavior.
  • It’s a fictional forecast of a data-deterministic culture in which computer algorithms constantly analyze consumers’ profiles, issuing individuals numeric rankings that may benefit or hinder them.
  • Think of this technique as reputation engine optimization. If an algorithm incorrectly pegs you as physically unfit, for instance, the book suggests that you can try to mitigate the wrong. You can buy a Fitbit fitness tracker, for instance, and upload the exercise data to a public profile — or even “snap that Fitbit to your dog” and “you’ll quickly be the fittest person in your town.”
  • Professor Pasquale offers a more downbeat reading. Companies, he says, are using such a wide variety of numerical rating systems that it would be impossible for average people to significantly influence their scores.
  • “Corporations depend on automated judgments that may be wrong, biased or destructive,” Professor Pasquale writes. “Faulty data, invalid assumptions and defective models can’t be corrected when they are hidden.”
  • Moreover, trying to influence scoring systems could backfire. If a person attached a fitness device to a dog and tried to claim the resulting exercise log, he suggests, an algorithm might be able to tell the difference and issue that person a high score for propensity toward fraudulent activity.
  • “People shouldn’t think they can outwit corporations with hundreds of millions of dollars,” Professor Pasquale said in a phone interview. Consumers would have more control, he argues, if Congress extended the right to see and correct credit reports to other kinds of rankings.
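The scoring systems described above can be pictured with a minimal sketch. This is a hypothetical illustration only: the attributes, weights, and logistic form are invented for the example and are not taken from any real scoring product, whose models remain proprietary — which is precisely the article's point.

```python
import math

# Invented attributes and weights for illustration; real scoring
# models are proprietary and far more complex.
WEIGHTS = {
    "missed_payments": -0.8,   # each missed payment lowers the score
    "income_decile": 0.3,      # higher income decile raises it
    "churn_risk_flag": -1.2,   # flagged as likely to switch providers
}
BIAS = 0.5

def consumer_score(profile: dict) -> float:
    """Map a consumer profile to a 0-100 score via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * profile.get(k, 0) for k in WEIGHTS)
    return round(100 / (1 + math.exp(-z)), 1)

profile = {"missed_payments": 2, "income_decile": 7, "churn_risk_flag": 1}
print(consumer_score(profile))  # → 45.0
```

Even in this toy version, the consumer being scored has no way to see which attribute moved the number, or to contest a wrong input — the opacity Pasquale describes.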
kushnerha

American 'space pioneers' deserve asteroid rights, Congress says | Science | The Guardian - 0 views

  • In a rare bipartisan moment US lawmakers opened up the possibility of mining on other worlds despite an international treaty barring sovereign claims in space
  • The US Senate passed the Space Act of 2015 this week, sending its revisions of the bill back to the House for an expected approval, after which it would land on the president’s desk. The bill has a slew of provisions to encourage commercial companies that want to explore space and exploit its resources, granting “asteroid resource” and “space resource” rights to US citizens who managed to acquire the resource themselves.
  • lawmakers defined “space resource” as “an abiotic resource in situ in outer space” that would include water and minerals but not life.
  • ...8 more annotations...
  • The company’s president, Chris Lewicki, compared the bill to the Homestead Act, which distributed public land to Americans heading west and helped reshape the United States. “The Homestead Act of 1862 advocated for the search for gold and timber, and today, HR 2262 fuels a new economy,” Lewicki said in a statement. “This off-planet economy will forever change our lives for the better here on Earth.”
  • obstacle to space mining is a 1967 international treaty known as the Outer Space Treaty, to which the US is a signatory. The treaty holds that no “celestial body” is subject to “national appropriation by claim of sovereignty, by means of use or occupation, or by any other means”.
  • careful to add in their bill that they grant rights only to citizens who act under the law, “including the international obligations of the United States”.
  • added a “disclaimer of extraterritorial sovereignty”, saying the US does not thereby assert ownership, exclusive rights or jurisdiction “of any celestial body”.
  • bill asserts certain rights for US citizens, it disavows any national claim – sending a mixed message on asteroid rights
  • “They’re trying to dance around the issue. I tend to think it doesn’t create any rights because it conflicts with international law. The bottom line is before you can give somebody the right to harvest a resource you have to have ownership.”
  • Asteroids vary in their makeup, but some are rich in platinum and other valuable metals. Nasa has run missions to explore the possibilities of mining asteroids
  • solidifies America’s leading role in the commercial space sector
mcginnisca

Why Sexism at the Office Makes Women Love Hillary Clinton - The New York Times - 0 views

  • Younger Democratic women are mostly for Bernie Sanders; older women lean more toward Hillary Clinton.
  • The idealistic but ungrateful naïfs who think sexism is a thing of the past and believe, as Mr. Sanders recently said, that “people should not be voting for candidates based on their gender” are seemingly battling the pantsuited old scolds prattling on about feminism
  • More time in a sexist world, and particularly in the workplace, radicalizes women.
  • ...15 more annotations...
  • It’s not that young women aren’t feminists, or don’t care about sexism. For college-age women — Mr. Sanders’s female base — sexism tends to be linked to sex.
  • Young women are neither ungrateful to their feminist foremothers nor complacent; rather, they are activists for feminist causes that reflect their needs.
  • College-educated women see only a tiny pay gap in their early- and mid-20s, making 97 cents for every dollar earned by their male colleagues.
  • That experience starts to change a few more years into the work force. By 35, those same college-educated women are making 15 percent less than their male peers. Women’s earnings peak between ages 35 and 44 and then plateau, while men’s continue to rise.
  • When women have children, they’re penalized: They’re considered less competent, they’re less likely to be hired for a new job and they’re paid less
  • one of the few female partners always seemed to be in charge of ordering lunch
  • For the many women who live at the center of that time crush, Mrs. Clinton’s emphasis on the wage gap, paid family leave and universal prekindergarten may be particularly appealing. Mr. Sanders, who also supports paid leave and universal pre-K, takes a different rhetorical tone, usually stressing affordable higher education and universal health care.
  • I watched as men with little or irrelevant experience were hired and promoted, because they had such great ideas, or they fit in better. “We want a woman,” the conclusion seemed to be, “just not this woman.”
  • in the now-common refrain about Hillary Clinton: “I want a woman president, just not this woman president.”
  • a 19-year-old aspiring lawyer who is volunteering for Mr. Sanders today will work for firms with more female partners and live in a world where the wage gap has shrunk. But the trends show that her experience in a decade is unlikely to be that different from mine.
  • Many more women over 25 are in the work force than those under, and women over 25 also do about twice as much unpaid domestic work as their younger counterparts.
  • I listened as some of my male colleagues opined on the need to marry a woman who would stay home with the children — that wasn’t sexist, they insisted, because it wasn’t that they thought only women should stay home; it was just that somebody had to, and the years in which they planned on having children would be crucial ones for their own careers.
  • Child care is just as expensive in many places as sending a kid to public university, but a college kid can get a part-time job. A toddler can’t.”
  • There are many other reasons women in the 30-and-over cohort may lean toward Mrs. Clinton. They’ve already seen promises of revolutionary change fall short. They may prefer a candidate with a progressive ideology but a more restrained, and potentially more effective, strategy for putting that ideology in place.
  • If it’s not this woman, this year, then who and when?
Javier E

Campaigns Mine Personal Lives to Get Out Vote - NYTimes.com - 0 views

  • Strategists affiliated with the Obama and Romney campaigns say they have access to information about the personal lives of voters at a scale never before imagined. And they are using that data to try to influence voting habits — in effect, to train voters to go to the polls through subtle cues, rewards and threats in a manner akin to the marketing efforts of credit card companies and big-box retailers.
  • In the weeks before Election Day, millions of voters will hear from callers with surprisingly detailed knowledge of their lives. These callers — friends of friends or long-lost work colleagues — will identify themselves as volunteers for the campaigns or independent political groups. The callers will be guided by scripts and call lists compiled by people — or computers — with access to details like whether voters may have visited pornography Web sites, have homes in foreclosure, are more prone to drink Michelob Ultra than Corona or have gay friends or enjoy expensive vacations.
  • “You don’t want your analytical efforts to be obvious because voters get creeped out,” said a Romney campaign official who was not authorized to speak to a reporter. “A lot of what we’re doing is behind the scenes.”
  • ...4 more annotations...
  • however, consultants to both campaigns said they had bought demographic data from companies that study details like voters’ shopping histories, gambling tendencies, interest in get-rich-quick schemes, dating preferences and financial problems. The campaigns themselves, according to campaign employees, have examined voters’ online exchanges and social networks to see what they care about and whom they know. They have also authorized tests to see if, say, a phone call from a distant cousin or a new friend would be more likely to prompt the urge to cast a ballot.
  • The campaigns have planted software known as cookies on voters’ computers to see if they frequent evangelical or erotic Web sites for clues to their moral perspectives. Voters who visit religious Web sites might be greeted with religion-friendly messages when they return to mittromney.com or barackobama.com. The campaigns’ consultants have run experiments to determine if embarrassing someone for not voting by sending letters to their neighbors or posting their voting histories online is effective.
  • “I’ve had half-a-dozen conversations with third parties who are wondering if this is the year to start shaming,” said one consultant who works closely with Democratic organizations. “Obama can’t do it. But the ‘super PACs’ are anonymous. They don’t have to put anything on the flier to let the voter know who to blame.”
  • Officials at both campaigns say the most insightful data remains the basics: a voter’s party affiliation, voting history, basic information like age and race, and preferences gleaned from one-on-one conversations with volunteers. But more subtle data mining has helped the Obama campaign learn that their supporters often eat at Red Lobster, shop at Burlington Coat Factory and listen to smooth jazz. Romney backers are more likely to drink Samuel Adams beer, eat at Olive Garden and watch college football.
Javier E

Just Don't Call It Privacy - The New York Times - 2 views

  • In a surveillance economy where companies track, analyze and capitalize on our clicks, the issue at hand isn’t privacy. The problem is unfettered data exploitation and its potential deleterious consequences — among them, unequal consumer treatment, financial fraud, identity theft, manipulative marketing and discrimination.
  • In other words, asking companies whose business models revolve around exploiting data-based consumer-influence techniques to explain their privacy policies seems about as useful as asking sharks to hold forth on veganism.
  • They should be examining business practices. They should be examining how these firms collect and use the personal data of customers, of internet users.”
  • ...7 more annotations...
  • Companies are sending their “policy and law folks to Washington to make the government go away — not the engineering folks who actually understand these systems in depth and can talk through alternatives,” Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University, told me.
  • revelations about Russian election interference and Cambridge Analytica, the voter-profiling company that obtained information on millions of Facebook users, have made it clear that data-driven influence campaigns can scale quickly and cause societal harm.
  • Do we want a future in which companies can freely parse the photos we posted last year, or the location data from the fitness apps we used last week, to infer whether we are stressed or depressed or financially strapped or emotionally vulnerable — and take advantage of that?
  • AT&T’s privacy policy says the mobile phone and cable TV provider may use third-party data to categorize subscribers, without using their real names, into interest segments and show them ads accordingly. That sounds reasonable enough
  • AT&T can find out which subscribers have indigestion — or at least which ones bought over-the-counter drugs to treat it.
  • In a case study for advertisers, AT&T describes segmenting DirecTV subscribers who bought antacids and then targeting them with ads for the medication. The firm was also able to track those subscribers’ spending. Households who saw the antacid ads spent 725 percent more on the drugs than a national audience.
  • Michael Balmoris, a spokesman for AT&T, said the company’s privacy policy was “transparent and precise, and describes in plain language how we use information and the choices we give customers.”
Javier E

Opinion | Two visions of 'normal' collided in our abnormal pandemic year - The Washingt... - 0 views

  • The date was Sept. 17, 2001. The rubble was still smoking. As silly as this sounds, I was hoping it would make me cry.
  • That didn’t happen. The truth is, it still looked like something on television, a surreal shot from a disaster movie. I was stunned but unmoved.
  • Later, trying to understand the difference between those two moments, I told people, “The rubble still didn’t feel real.”
  • ...11 more annotations...
  • now, after a year of pandemic, I realize that wasn’t the problem. The rubble was real, all right. It just wasn’t normal.
  • it always, somehow, came back to that essential human craving for things to be normal, and our inability to believe that they are not, even when presented with compelling evidence.
  • This phenomenon is well-known to cognitive scientists, who have dubbed it “normalcy bias.”
  • the greater risk is more often the opposite: People can’t quite believe. They ignore the fire alarm, defy the order to evacuate ahead of the hurricane, or pause to grab their luggage when exiting the crashed plane. Too often, they die.
  • Calling the quest for normalcy a bias makes it sound bad, but most of the time this tendency is a good thing. The world is full of aberrations, most of them meaningless. If we aimed for maximal reaction to every anomaly we encountered, we’d break down from sheer nervous exhaustion.
  • But when things go disastrously wrong, our optimal response is at war with the part of our brain that insists things are fine. We try to reoccupy the old normal even if it’s become radioactive and salted with mines. We still resist the new normal — even when it’s staring us in the face.
  • Nine months into our current disaster, I now see that our bitter divides over pandemic response were most fundamentally a contest between two ideas of what it meant to get “back to normal.”
  • One group wanted to feel as safe as they had before a virus invaded our shores; the other wanted to feel as unfettered
  • The disputes that followed weren’t just a fight to determine whose idea of normal would prevail. They were a battle against an unthinkable reality, which was that neither kind of normalcy was fully possible anymore.
  • I suspect we all might have been less willing to make war on our opponents if only we’d believed that we were fighting people not very different from how we were — exhausted by the whole thing and frantic to feel like themselves again
  • Some catastrophes are simply too big to be understood except in the smallest way, through their most ordinary human details
peterconnelly

German Chancellor accused of comparing climate activists to Nazis - CNN - 0 views

  • German Chancellor Olaf Scholz was accused Monday of comparing climate activists to Nazis, in allegations that his spokesperson said were "completely absurd."
  • "I'll be honest: These black-clad displays at various events by the same people over and over again remind me of a time that is, thank God, long gone by," he said in an exchange captured on camera.
  • Scholz was speaking about the phase-out of coal-fired power generation and resulting jobs losses in open cast mining when he was interrupted.
  • ...4 more annotations...
  • Prominent German climate scientist Friederike Otto commented that "Scholz 'forgets' our worst history, dismisses every generation that comes after him as irrelevant & the audience just applauds."
  • "Where does one begin? In just one half-sentence, the Chancellor of the Federal Republic relativizes the Nazi regime and, in a paradoxical way, also the climate crisis," she wrote on Twitter. "He stylizes climate protection as an ideology with parallels to the Nazi regime. In 2022. Jesus. This is such a scandal."
  • "I have also been to events where five people sat dressed in the same way, each had a well-rehearsed stance, and then they do it again every time," he said. "And that's why I think that is not a discussion, that is not participation in a discussion, but an attempt to manipulate events for one's own purposes. One should not do this."
  • The chancellor leads a three-party coalition with partners the Greens and pro-business Free Democrats, and their pledge to improve climate change action was central to their campaign.
Javier E

Reed Graduate: Hum 110 Encourages Challenging the Past - The Atlantic - 0 views

  • a worthwhile—and necessary—discussion about whether anyone should read the ancient Greeks in the first place, and if so why and how.
  • As someone who took Hum 110 more than 20 years ago, the news from campus has made me reflect on what I learned in the course. The answers, both equally true, are that I didn’t learn very much—and that I learned everything.
  • I slogged through the Hum 110 readings and wrote the required papers, but I can’t say that the words of Herodotus, Sappho, or Homer really sank in.
  • ...9 more annotations...
  • And yet they did. What I learned in Hum 110 is that so-called Western civilization is a narrative much like any other—except that it happens to affect just about everyone on earth. No matter where we were born or what we look like or what we believe, the narrative of Western civilization is part of the cultural water we swim in. By taking me back to the origins of that narrative, Hum 110 did me the great favor of hauling it into view—and impressing me with the universal right and duty to question it.
  • Members of Reedies Against Racism argue that Hum 110 simply perpetuates the most familiar version of the Western-civilization narrative—that positioning Plato and Aristotle at the very center of the college curriculum helps ensure their continued influence, along with the continued silencing of other voices from the past. This is worth considering
  • To many of the campus protesters, then, the Hum 110 syllabus looks like a monument overdue for toppling; online discussions have even compared it to the Confederate flag.
  • I respect the beauty and boldness and skill displayed in all these texts, and I respect the expertise of those who devote their lives to studying them. But I learned in Hum 110 that to respect a text is to keep experimenting with it, and to keep testing its relevance. Some of these works have already survived thousands of years of scrutiny; let’s see if they can take a few millennia more.
  • In my experience, the Hum 110 syllabus wasn’t a tool of exclusion but a route to inclusion.
  • The syllabus is more diverse, as is the faculty that teaches it and the student body that reads it. This is not to say that the syllabus is perfect—far from it. It’s to say that it isn’t carved in stone—and unlike a monument or a flag, it’s not meant to teach reverence. In fact, Hum 110 is intended to teach precisely the opposite.
  • Not long ago, a friend of mine told me about some classical theater she’d seen. “I enjoyed it,” she said, “but I don’t feel like it’s really mine.” She wasn’t talking about representation, about whether someone who looked or acted like her had appeared on stage. She just felt that despite her smarts, and her multiple graduate degrees, she wasn’t familiar enough with the work to engage with it.
  • But by exposing the roots of the narrative known as Western civilization, Hum 110 opened a door to me that’s never closed.
  • What I really learned in Hum 110 is that the ancient Greeks—and the rest of our collective cultural ancestries, for that matter—are mine. They’re mine and yours and theirs and ours, to honor with our sharpest spears.
Duncan H

Facebook Is Using You - NYTimes.com - 0 views

  • Facebook’s inventory consists of personal data — yours and mine.
  • Facebook makes money by selling ad space to companies that want to reach us. Advertisers choose key words or details — like relationship status, location, activities, favorite books and employment — and then Facebook runs the ads for the targeted subset of its 845 million users
  • The magnitude of online information Facebook has available about each of us for targeted marketing is stunning. In Europe, laws give people the right to know what data companies have about them, but that is not the case in the United States.
  • ...8 more annotations...
  • The bits and bytes about your life can easily be used against you. Whether you can obtain a job, credit or insurance can be based on your digital doppelgänger — and you may never know why you’ve been turned down.
  • Stereotyping is alive and well in data aggregation. Your application for credit could be declined not on the basis of your own finances or credit history, but on the basis of aggregate data — what other people whose likes and dislikes are similar to yours have done
  • Data aggregators’ practices conflict with what people say they want. A 2008 Consumer Reports poll of 2,000 people found that 93 percent thought Internet companies should always ask for permission before using personal information, and 72 percent wanted the right to opt out of online tracking. A study by Princeton Survey Research Associates in 2009 using a random sample of 1,000 people found that 69 percent thought that the United States should adopt a law giving people the right to learn everything a Web site knows about them. We need a do-not-track law, similar to the do-not-call one. Now it’s not just about whether my dinner will be interrupted by a telemarketer. It’s about whether my dreams will be dashed by the collection of bits and bytes over which I have no control and for which companies are currently unaccountable.
  • The term Weblining describes the practice of denying people opportunities based on their digital selves. You might be refused health insurance based on a Google search you did about a medical condition. You might be shown a credit card with a lower credit limit, not because of your credit history, but because of your race, sex or ZIP code or the types of Web sites you visit.
  • Advertisers are drawing new redlines, limiting people to the roles society expects them to play
  • Even though laws allow people to challenge false information in credit reports, there are no laws that require data aggregators to reveal what they know about you. If I’ve Googled “diabetes” for a friend or “date rape drugs” for a mystery I’m writing, data aggregators assume those searches reflect my own health and proclivities. Because no laws regulate what types of data these aggregators can collect, they make their own rules.
  • LAST week, Facebook filed documents with the government that will allow it to sell shares of stock to the public. It is estimated to be worth at least $75 billion. But unlike other big-ticket corporations, it doesn’t have an inventory of widgets or gadgets, cars or phones.
  • If you indicate that you like cupcakes, live in a certain neighborhood and have invited friends over, expect an ad from a nearby bakery to appear on your page.
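The targeting mechanism the article describes — advertisers choosing details, the platform matching the subset of users — can be sketched in a few lines. The users, attributes, and matching rule here are invented for illustration, not Facebook's actual system.

```python
# Hypothetical sketch of attribute-based ad targeting: the advertiser
# picks a location and a set of required interests, and only matching
# users see the ad. All data below is made up for the example.
users = [
    {"name": "A", "location": "Brooklyn", "likes": {"cupcakes", "jazz"}},
    {"name": "B", "location": "Queens",   "likes": {"cupcakes"}},
    {"name": "C", "location": "Brooklyn", "likes": {"hiking"}},
]

def targeted_audience(users, location, required_likes):
    """Return the names of users matching every advertiser-chosen criterion."""
    return [
        u["name"] for u in users
        if u["location"] == location and required_likes <= u["likes"]
    ]

print(targeted_audience(users, "Brooklyn", {"cupcakes"}))  # → ['A']
```

The filter itself is trivial; the article's concern is the scale and opacity of the profile data feeding it.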
Javier E

How to Remember Everything You Want From Non-Fiction Books | by Eva Keiffenheim, MSc | ... - 0 views

  • A Bachelor’s degree taught me how to learn to ace exams. But it didn’t teach me how to learn to remember.
  • 65% to 80% of students answered “no” to the question “Do you study the way you do because somebody taught you to study that way?”
  • the most-popular Coursera course of all time: Dr. Barbara Oakley’s free course on “Learning How to Learn.” So did I. And while this course taught me about chunking, recalling, and interleaving
  • ...66 more annotations...
  • I learned something more useful: the existence of non-fiction literature that can teach you anything.
  • something felt odd. Whenever a conversation revolved around a serious non-fiction book I read, such as ‘Sapiens’ or ‘Thinking Fast and Slow,’ I could never remember much. Turns out, I hadn’t absorbed as much information as I’d believed. Since I couldn’t remember much, I felt as though reading wasn’t an investment in knowledge but mere entertainment.
  • When I opened up about my struggles, many others confessed they also can’t remember most of what they read, as if forgetting is a character flaw. But it isn’t.
  • It’s the way we work with books that’s flawed.
  • there’s a better way to read. Most people rely on techniques like highlighting, rereading, or, worst of all, completely passive reading, which are highly ineffective.
  • Since I started applying evidence-based learning strategies to reading non-fiction books, many things have changed. I can explain complex ideas during dinner conversations. I can recall interesting concepts and link them in my writing or podcasts. As a result, people come to me for all kinds of advice.
  • What’s the Architecture of Human Learning and Memory?
  • Human brains don’t work like recording devices. We don’t absorb information and knowledge by reading sentences.
  • we store new information in terms of its meaning to our existing memory
  • we give new information meaning by actively participating in the learning process — we interpret, connect, interrelate, or elaborate
  • To remember new information, we not only need to know it but also to know how it relates to what we already know.
  • Learning is dependent on memory processes because previously-stored knowledge functions as a framework in which newly learned information can be linked.”
  • Human memory works in three stages: acquisition, retention, and retrieval. In the acquisition phase, we link new information to existing knowledge; in the retention phase, we store it, and in the retrieval phase, we get information out of our memory.
  • Retrieval, the third stage, is cue dependent. This means the more mental links you’re generating during stage one, the acquisition phase, the easier you can access and use your knowledge.
  • we need to understand that the three phases interrelate
  • “creating durable and flexible access to to-be-learned information is partly a matter of achieving a meaningful encoding of that information and partly a matter of exercising the retrieval process.”
  • Next, we’ll look at the learning strategies that work best for our brains (elaboration, retrieval, spaced repetition, interleaving, self-testing) and see how we can apply those insights to reading non-fiction books.
  • The strategies that follow are rooted in research by the professors of Psychological & Brain Sciences Henry Roediger and Mark McDaniel. The two scientists spent ten years bridging the gap between cognitive psychology and education. Harvard University Press published their findings in the book ‘Make It Stick.’
  • #1 Elaboration
  • “Elaboration is the process of giving new material meaning by expressing it in your own words and connecting it with what you already know.”
  • Why elaboration works: Elaborative rehearsal encodes information into your long-term memory more effectively. The more detailed and the more strongly you connect new knowledge to what you already know, the better, because you’ll generate more cues. And the more cues you have, the easier you can retrieve your knowledge.
  • How I apply elaboration: Whenever I read an interesting section, I pause and ask myself about the real-life connection and potential application. The process is invisible, and my inner monologues sound like: “This idea reminds me of…, This insight conflicts with…, I don’t really understand how…, ” etc.
  • For example, when I learned about A/B testing in ‘The Lean Startup,’ I thought about applying this method to my startup. I added a note on the site stating we should try it in user testing next Wednesday. Thereby the book had an immediate application benefit to my life, and I will always remember how the methodology works.
  • How you can apply elaboration: Elaborate while you read by asking yourself meta-learning questions like “How does this relate to my life? In which situation will I make use of this knowledge? How does it relate to other insights I have on the topic?”
  • While pausing and asking yourself these questions, you’re generating important memory cues. If you take some notes, don’t transcribe the author’s words but try to summarize, synthesize, and analyze.
  • #2 Retrieval
  • With retrieval, you try to recall something you’ve learned in the past from your memory. Retrieval practice can take many forms: taking a test, writing an essay, answering multiple-choice questions, or practicing with flashcards.
  • the authors of ‘Make It Stick’ state: “While any kind of retrieval practice generally benefits learning, the implication seems to be that where more cognitive effort is required for retrieval, greater retention results.”
  • Whatever you settle for, be careful not to copy/paste the words from the author. If you don’t do the brain work yourself, you’ll skip the learning benefits of retrieval
  • Retrieval strengthens your memory and interrupts forgetting. As other researchers have replicated, the act of retrieving information is, as a learning event, considerably more potent than an additional study opportunity, particularly for facilitating long-term recall.
  • How I apply retrieval: I retrieve a book’s content from my memory by writing a book summary for every book I want to remember. I ask myself questions like: “How would you summarize the book in three sentences? Which concepts do you want to keep in mind or apply? How does the book relate to what you already know?”
  • I then publish my summaries on Goodreads or write an article about my favorite insights
  • How you can apply retrieval: You can come up with your own questions or use mine. If you don’t want to publish your summaries in public, you can write a summary into your journal, start a book club, create a private blog, or initiate a WhatsApp group for sharing book summaries.
  • a few days after we learn something, forgetting sets in
  • #3 Spaced Repetition
  • With spaced repetition, you repeat the same piece of information across increasing intervals.
  • The harder it feels to recall the information, the stronger the learning effect. “Spaced practice, which allows some forgetting to occur between sessions, strengthens both the learning and the cues and routes for fast retrieval.”
  • Why it works: It might sound counterintuitive, but forgetting is essential for learning. Spacing out practice might feel less productive than rereading a text because you’ll realize what you forgot. Your brain has to work harder to retrieve your knowledge, which is a good indicator of effective learning.
  • How I apply spaced repetition: After some weeks, I revisit a book and look at the summary questions (see #2). I try to come up with my own answers before I look at my actual summary. I can often only remember a fraction of what I wrote and have to look up the rest.
  • “Knowledge trapped in books neatly stacked is meaningless and powerless until applied for the betterment of life.”
  • How you can apply spaced repetition: You can revisit your book summary medium of choice and test yourself on what you remember. What were your action points from the book? Have you applied them? If not, what hindered you?
  • By testing yourself in varying intervals on your book summaries, you’ll strengthen both learning and cues for fast retrieval.
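The “increasing intervals” idea above can be sketched as a tiny review scheduler. This is a hedged illustration loosely modeled on SM-2-style algorithms used by flashcard tools, not a method described in the article; the `next_interval` function and its constants are assumptions chosen for the example.

```python
from datetime import date, timedelta

def next_interval(previous_interval_days, recall_quality, ease=2.5):
    """Return the next review interval in days.

    recall_quality: 0 (forgot completely) to 5 (perfect recall).
    Failing a review resets the interval; passing stretches it.
    """
    if recall_quality < 3:
        return 1  # forgot: start over with a short interval
    if previous_interval_days == 0:
        return 1  # first review, one day after reading
    if previous_interval_days == 1:
        return 6  # second review, a little under a week later
    return round(previous_interval_days * ease)

# Schedule five reviews of a book summary, assuming perfect recall each time.
interval, review_day = 0, date.today()
schedule = []
for _ in range(5):
    interval = next_interval(interval, recall_quality=5)
    review_day += timedelta(days=interval)
    schedule.append(interval)

print(schedule)  # intervals grow: [1, 6, 15, 38, 95]
```

Each successful recall pushes the next review further out, which is exactly the “allow some forgetting between sessions” effect the quoted passage describes.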
  • #4 Interleaving
  • Why interleaving works: Alternating between different problems feels harder because it, again, allows some forgetting to occur between sessions.
  • How I apply interleaving: I read different books at the same time.
  • How you can apply interleaving: Your brain can handle reading different books simultaneously, and it’s effective to do so. You can start a new book before you finish the one you’re reading. Diving back into a topic you partly forgot feels difficult at first, but as you know by now, that’s the effect you want to achieve.
  • #5 Self-Testing
  • While reading often falsely tricks us into perceived mastery, testing shows us whether we truly mastered the subject at hand. Self-testing helps you identify knowledge gaps and brings weak areas to light
  • “It’s better to solve a problem than to memorize a solution.”
  • Why it works: Self-testing helps you overcome the illusion of knowledge. “One of the best habits a learner can instill in herself is regular self-quizzing to recalibrate her understanding of what she does and does not know.”
  • How I apply self-testing: I explain the key lessons from non-fiction books I want to remember to others. Thereby, I test whether I really got the concept. Often, I didn’t
  • instead of feeling frustrated, cognitive science made me realize that identifying knowledge gaps is a desirable and necessary effect for long-term remembering.
  • How you can apply self-testing: Teaching your lessons learned from a non-fiction book is a great way to test yourself. Before you explain a topic to somebody, you have to combine several mental tasks: filter relevant information, organize this information, and articulate it using your own vocabulary.
  • Now that I’ve discovered how to use my Kindle as a learning device, I wouldn’t trade it for a paper book anymore. Here are the four steps it takes to enrich your e-reading experience
  • 1) Highlight everything you want to remember
  • it won’t surprise you that researchers have shown highlighting on its own to be ineffective. It’s passive and doesn’t create memory cues.
  • 2) Cut down your highlights in your browser
  • After you’ve finished reading the book, you want to reduce your highlights to the essential parts. Visit your Kindle Notes page to find a list of all your highlights. Using your desktop browser is faster and more convenient than editing your highlights on your e-reading device.
  • Now, browse through your highlights, delete what you no longer need, and add notes to the ones you really like. By adding notes to the highlights, you’ll connect the new information to your existing knowledge
  • 3) Use software to practice spaced repetition
  • This part is the main reason e-books beat printed books. While you can do all of the above with a little extra time on your physical books, there’s no way to systemize your repetition practice.
  • Readwise is the best software to combine spaced repetition with your e-books. It’s an online service that connects to your Kindle account and imports all your Kindle highlights. Then, it creates flashcards of your highlights and allows you to export your highlights to your favorite note-taking app.
  • Common Learning Myths Debunked
  • While reading about and studying evidence-based learning techniques, I also came across some things I had wrongly believed to be true.
  • #2 Effective learning should feel easy
  • We think learning works best when it feels productive. That’s why we continue to use ineffective techniques like rereading or highlighting. But learning works best when it feels hard, or as the authors of ‘Make It Stick’ write: “Learning that’s easy is like writing in sand, here today and gone tomorrow.”
  • In Conclusion
  • I developed and adjusted these strategies over two years, and they’re still a work in progress.
  • Try all of them but don’t force yourself through anything that doesn’t feel right for you. I encourage you to do your own research, add further techniques, and skip what doesn’t serve you
  • “In the case of good books, the point is not to see how many of them you can get through, but rather how many can get through to you.”— Mortimer J. Adler
Javier E

The Older Mind May Just Be a Fuller Mind - NYTimes.com - 0 views

  • Memory’s speed and accuracy begin to slip around age 25 and keep on slipping.
  • Now comes a new kind of challenge to the evidence of a cognitive decline, from a decidedly digital quarter: data mining, based on theories of information processing
  • Since educated older people generally know more words than younger people, simply by virtue of having been around longer, the experiment simulates what an older brain has to do to retrieve a word. And when the researchers incorporated that difference into the models, the aging “deficits” largely disappeared.
  • Neuroscientists have some reason to believe that neural processing speed, like many reflexes, slows over the years; anatomical studies suggest that the brain also undergoes subtle structural changes that could affect memory.
  • doubts about the average extent of the decline are rooted not in individual differences but in study methodology. Many studies comparing older and younger people, for instance, did not take into account the effects of pre-symptomatic Alzheimer’s disease,
  • The new data-mining analysis also raises questions about many of the measures scientists use. Dr. Ramscar and his colleagues applied leading learning models to an estimated pool of words and phrases that an educated 70-year-old would have seen, and another pool suitable for an educated 20-year-old. Their model accounted for more than 75 percent of the difference in scores between older and younger adults on items in a paired-associate test
  • That is to say, the larger the library you have in your head, the longer it usually takes to find a particular word (or pair).
  • Scientists who study thinking and memory often make a broad distinction between “fluid” and “crystallized” intelligence. The former includes short-term memory, like holding a phone number in mind, analytical reasoning, and the ability to tune out distractions, like ambient conversation. The latter is accumulated knowledge, vocabulary and expertise.
  • an increase in crystallized intelligence can account for a decrease in fluid intelligence,
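The “larger library, longer lookup” claim in the annotations above can be illustrated with a toy simulation. This is not the researchers’ actual learning model; the vocabulary sizes and the linear-scan cost are invented purely for illustration of why a bigger stored lexicon can slow retrieval without any underlying decline.

```python
import random

def retrieval_cost(vocabulary_size, trials=2000, rng=random.Random(42)):
    """Average number of candidate words scanned before finding a target.

    A crude stand-in for cue-based lookup: the more words stored,
    the more candidates compete for retrieval.
    """
    total = 0
    for _ in range(trials):
        target = rng.randrange(vocabulary_size)
        # a linear scan models interference from a larger lexicon
        total += target + 1
    return total / trials

young = retrieval_cost(20_000)  # rough vocabulary of an educated 20-year-old
old = retrieval_cost(40_000)    # larger lexicon after decades of reading

print(old > young)  # slower average lookup reflects knowing more, not decline
```

Even this crude model reproduces the qualitative pattern the article describes: identical retrieval machinery looks “slower” when it has more crystallized knowledge to search through.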