TOK Friends: Group items tagged Google


Javier E

The Upside of Being Ruled by the Five Tech Giants - The New York Times - 0 views

  • ever since I started writing about what I call the Frightful Five, some have said my very premise is off base. I have argued that the companies’ size and influence pose a danger. But another argument suggests the opposite — that it’s better to be ruled by a handful of responsive companies capable of bowing to political and legal pressure. In other words, wouldn’t you rather deal with five horse-size Zucks than 100 duck-size technoforces?
  • Given all the ways that tech can go wrong — as we are seeing in the Russia influence scandal — isn’t it better that we can blame, and demand fixes from, a handful of American executives when things do go haywire?
  • One benefit of having five giant companies in charge of today’s tech infrastructure is that they provide a convenient focus for addressing those problems.
  • ...6 more annotations...
  • This does not mean they will succeed in fixing every problem their tech creates — and in some cases their fixes may well raise other problems, like questions about their power over freedom of expression. But at least they can try to address the wide variety of externalities posed by tech, which may have been impossible for an internet more fragmented by smaller firms.
  • Rob Atkinson, president of the Information Technology and Innovation Foundation, a think tank, and co-author of “Big Is Beautiful,” a coming book that extols the social and economic virtues of big companies.
  • “As long as their innovation rents are recycled into research and development that leads to new products, then what’s to complain about?”
  • At the same time, they are all locked in intense battles for new markets and technologies. And not only do they keep creating new tech, but they are coming at it in diverse ways — with different business models, different philosophies and different sets of ethics.
  • it was perhaps inevitable that we would see the rise of a handful of large companies take control of much of the modern tech business.
  • But it wasn’t inevitable that these companies would be based in and controlled from the United States. And it’s not obvious that will remain the case — the top tech companies of tomorrow might easily be Chinese, or Indian or Russian or European. But for now, that means we are dealing with companies that feel constrained by American laws and values.
Javier E

Uber, Arizona, and the Limits of Self-Driving Cars - The Atlantic - 0 views

  • it’s a good time for a critical review of the technical literature of self-driving cars. This literature reveals that autonomous vehicles don’t work as well as their creators might like the public to believe.
  • The world is a 3-D grid with x, y, and z coordinates. The car moves through the grid from point A to point B, using highly precise GPS measurements gathered from nearby satellites. Several other systems operate at the same time. The car’s sensors bounce out laser radar waves and measure the response time to build a “picture” of what is outside. (A sketch of this time-of-flight measurement follows this list.)
  • It is a masterfully designed, intricate computational system. However, there are dangers.
  • ...11 more annotations...
  • Self-driving cars navigate by GPS. What happens if a self-driving school bus is speeding down the highway and loses its navigation system at 75 mph because of a jammer in the next lane?
  • Because they calculate trajectories only for objects in motion (like pedestrians or bicyclists), not for the stationary fire truck, they can’t react quickly enough to register a previously stationary object as an object in motion.
  • If the car was programmed to save the car’s occupants at the expense of pedestrians, the autonomous-car industry is facing its first public moment of moral reckoning.
  • This kind of blind optimism about technology, the assumption that tech is always the right answer, is a kind of bias that I call technochauvinism.
  • With driving, the stakes are much higher. In a self-driving car, death is an unavoidable feature, not a bug.
  • By this point, many people know about the trolley problem as an example of an ethical decision that has to be programmed into a self-driving car.
  • an overwhelming number of tech people (and investors) seem to want self-driving cars so badly that they are willing to ignore evidence suggesting that self-driving cars could cause as much harm as good
  • But imagine the opposite scenario: The car is programmed to sacrifice the driver and the occupants to preserve the lives of bystanders. Would you get into that car with your child? Would you let anyone in your family ride in it? Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver?
  • Plenty of people want self-driving cars to make their lives easier, but self-driving cars aren’t the only way to fix America’s traffic problems. One straightforward solution would be to invest more in public transportation.
  • Public-transportation funding is a complex issue that requires massive, collaborative effort over a period of years. It involves government bureaucracy. This is exactly the kind of project that tech people often avoid attacking, because it takes a really long time and the fixes are complicated.
  • Plenty of people, including technologists, are sounding warnings about self-driving cars and how they attempt to tackle very hard problems that haven’t yet been solved. People are warning of a likely future for self-driving cars that is neither safe nor ethical nor toward the greater good. Still,  the idea that self-driving cars are nifty and coming soon is often the accepted wisdom, and there’s a tendency to forget that technologists have been saying “coming soon” for decades now.
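
The “picture” built from laser returns, described in an annotation above, rests on a simple time-of-flight calculation. A minimal sketch, assuming an idealized sensor and ignoring the noise handling, beam geometry, and point-cloud assembly of a real perception pipeline:

```python
# Idealized time-of-flight ranging: one-way distance is half the
# round trip of a laser pulse.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to whatever reflected the pulse, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that comes back after 200 nanoseconds hit something ~30 m away.
print(f"{distance_from_echo(200e-9):.1f} m")  # -> 30.0 m
```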
Javier E

Lawmakers hope to use Facebook's 'oil spill' privacy mishap to usher in sweeping new la... - 0 views

  • Zuckerberg’s early, broad support for greater regulation in the wake of the Cambridge Analytica controversy contrasts with the years of lobbying by Facebook and its tech peers in D.C. to stave off any new rules that would restrict the information they collect — the lifeblood of their business models.
  • And Facebook joined with Google, Comcast and others this year in fighting a ballot measure in California that would allow consumers to opt out from having their information shared with advertisers while opening the companies to new lawsuits if they suffer data breaches. Facebook so far has spent about $200,000 to try to defeat the idea, state ethics records reflect.
Javier E

It's True: False News Spreads Faster and Wider. And Humans Are to Blame. - The New York... - 0 views

  • What if the scourge of false news on the internet is not the result of Russian operatives or partisan zealots or computer-controlled bots? What if the main problem is us?
  • People are the principal culprits
  • people, the study’s authors also say, prefer false news.
  • ...18 more annotations...
  • As a result, false news travels faster, farther and deeper through the social network than true news.
  • those patterns applied to every subject they studied, not only politics and urban legends, but also business, science and technology.
  • The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org
  • with or without the bots, the results were essentially the same.
  • “It’s not really the robots that are to blame.”
  • “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
  • False claims were 70 percent more likely than the truth to be shared on Twitter. True stories were rarely retweeted by more than 1,000 people, but the top 1 percent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.
  • the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter.
  • “The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
  • The M.I.T. researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
  • The goal, said Soroush Vosoughi, a postdoctoral researcher at the M.I.T. Media Lab and the lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
  • The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions. (A toy version of this lexicon lookup appears after this list.)
  • False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
  • The M.I.T. researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon, and mention possible interventions, like better labeling, to alter behavior.
  • For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of American adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed.
  • In fall 2016, Mr. Roy, an associate professor at the M.I.T. Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes like shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
  • Mr. Roy acknowledged the challenge of trying not only to alter individual behavior but also to enlist the support of big internet platforms like Facebook, Google, YouTube and Twitter, and media companies
  • “Polarization,” he said, “has turned out to be a great business model.”
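
The reply-sentiment measurement described above is, at heart, a lexicon lookup: each word in a reply is checked against a table of word-emotion associations and the hits are tallied. A minimal sketch; the five-word table is invented and merely stands in for the real NRC Emotion Lexicon:

```python
# Tally the emotions associated with the words of one reply.
from collections import Counter

TOY_LEXICON = {  # word -> associated emotions (illustrative, not NRC data)
    "unbelievable": {"surprise"},
    "shocking": {"surprise", "disgust"},
    "gross": {"disgust"},
    "hopeful": {"anticipation", "joy"},
    "tragic": {"sadness"},
}

def emotion_profile(reply: str) -> Counter:
    counts = Counter()
    for word in reply.lower().split():
        for emotion in TOY_LEXICON.get(word.strip(".,!?"), ()):
            counts[emotion] += 1
    return counts

print(emotion_profile("Unbelievable. Shocking and gross!"))
# -> Counter({'surprise': 2, 'disgust': 2})
```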
Javier E

The Facebook Fallacy: Privacy Is Up to You - The New York Times - 0 views

  • As Facebook’s co-founder and chief executive parried questions from members of Congress about how the social network would protect its users’ privacy, he returned time and again to what probably sounded like an unimpeachable proposition.
  • By providing its users with greater and more transparent controls over the personal data they share and how it is used for targeted advertising, he insisted, Facebook could empower them to make their own call and decide how much privacy they were willing to put on the block.
  • providing a greater sense of control over their personal data won’t make Facebook users more cautious. It will instead encourage them to share more.
  • ...21 more annotations...
  • “Disingenuous is the adjective I had in my mind,”
  • “Fifteen years ago it would have been legitimate to propose this argument,” he added. “But it is no longer legitimate to ignore the behavioral problems and propose simply more transparency and controls.”
  • Professor Acquisti and two colleagues, Laura Brandimarte and the behavioral economist George Loewenstein, published research on this behavior nearly six years ago. “Providing users of modern information-sharing technologies with more granular privacy controls may lead them to share more sensitive information with larger, and possibly riskier, audiences,” they concluded.
  • the critical question is whether, given the tools, we can be trusted to manage the experience. The increasing body of research into how we behave online suggests not.
  • “Privacy control settings give people more rope to hang themselves,” Professor Loewenstein told me. “Facebook has figured this out, so they give you incredibly granular controls.”
  • This paradox is hardly the only psychological quirk for the social network to exploit. Consider default settings. Tons of research in behavioral economics has found that people tend to stick to the default setting of whatever is offered to them, even when they could change it easily.
  • “Facebook is acutely aware of this,” Professor Loewenstein told me. In 2005, its default settings shared most profile fields with, at most, friends of friends. Nothing was shared by default with the full internet.
  • By 2010, however, likes, name, gender, picture and a lot of other things were shared with everybody online. “Facebook changed the defaults because it appreciated their power,” Professor Loewenstein added.
  • The phenomenon even has a name: the “control paradox.”
  • people who profess concern about privacy will provide the emails of their friends in exchange for some pizza.
  • They also found that providing consumers reassuring though irrelevant information about their ability to protect their privacy will make them less likely to avoid surveillance.
  • Another experiment revealed that people are more willing to come clean about their engagement in illicit or questionable behavior when they believe others have done so, too
  • Those in the industry often argue that people don’t really care about their privacy — that they may seem concerned when they answer surveys, but still routinely accept cookies and consent to have their data harvested in exchange for cool online experiences
  • Professor Acquisti thinks this is a fallacy. The cognitive hurdles to manage our privacy online are simply too steep.
  • While we are good at handling our privacy in the offline world, lowering our voices or closing the curtains as the occasion may warrant, there are no cues online to alert us to a potential privacy invasion
  • Even if we were to know precisely what information companies like Facebook have about us and how it will be used, which we don’t, it would be hard for us to assess potential harms
  • Members of Congress have mostly let market forces prevail online, unfettered by government meddling. Privacy protection in the internet economy has relied on the belief that consumers will make rational choices
  • Europe’s stringent new privacy protection law, which Facebook has promised to apply in the United States, may do better than the American system of disclosure and consent
  • the European system also relies mostly on faith that consumers will make rational choices.
  • The more that psychologists and behavioral economists study psychological biases and quirks, the clearer it seems that rational choices alone won’t work. “I don’t think any kind of disclosure or opt in or opt out is going to protect us from our worst instincts,”
  • What to do? Professor Acquisti suggests flipping the burden of proof. The case for privacy regulation rests on consumers’ proving that data collection is harmful. Why not ask the big online platforms like Facebook to prove they can’t work without it? If reducing data collection imposes a cost, we could figure out who bears it — whether consumers, advertisers or Facebook’s bottom line.
Javier E

What Does "Beauty" Look Like Around the World? - 5 views

  • Like most aspects of our lives, our web surfing habits suffer from naïve provincialism. Image Atlas, an online image search tool that categorizes results by country, offers at least one way to remedy this shortsightedness. 
  • Image Atlas allows you to customize your search by selecting different countries from its list and sorting them either alphabetically or by GDP.
  • We thought it might be interesting to put Image Atlas to the test by choosing a handful of countries and taking a look at the search results for the term “beauty.”
  • ...5 more annotations...
  • seeing them categorized by country does reveal some striking patterns, differences, and commonalities, the most obvious of which is the fact that the images consist overwhelmingly of women.
  • with few exceptions, they tend even in non-Western countries to be of fair-skinned, Western-looking women.
  • even in countries with very different cultural backgrounds, search engines appear to be saturated with a heavily biased, Westernized ideal of attractiveness.
  • There’s a staggering number of beauty products, makeup applicators, and spa treatments, all of which essentially suggest that beauty is something to be purchased and applied.
  • What are some other key search terms that you find particularly revealing?
Javier E

Your class determines how you look at your fellow creatures | The Economist - 0 views

  • Each of the 61 volunteers was asked to wear a Google Glass, walk for a block in New York and focus their attention on whatever captured their interest. Their souped-up specs then recorded everything they looked at. Afterwards, they filled out a questionnaire that asked, along with matters of age, sex and ethnicity, about their income, their level of education and the social class they believed they belonged to.
  • they found that the number of gazes at strangers did not vary with social class, but their duration did. Specifically, upper-middle-class and upper-class people gazed at the faces of others for a fifth of a second less than members of lower social classes.
  • Once again, lower-class people looked at faces for more time than those from the upper classes did. Specifically, working-class people spent a tenth of a second longer doing so than did upper-middle class people—a difference that did not apply when the same people looked at inanimate objects.
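
The analysis behind these findings reduces to comparing mean gaze durations across self-reported class groups. A toy sketch with invented numbers (the study’s raw data are not reproduced here):

```python
# Compare average face-gaze duration between two class groups.
from statistics import mean

gaze_seconds = {  # group -> logged face-gaze durations (hypothetical values)
    "working_class": [1.30, 1.25, 1.40],
    "upper_middle": [1.20, 1.15, 1.30],
}

means = {group: mean(times) for group, times in gaze_seconds.items()}
for group, m in means.items():
    print(f"{group}: {m:.2f} s")

print(f"difference: {means['working_class'] - means['upper_middle']:.2f} s")
# -> difference: 0.10 s, the order of magnitude the article reports
```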
Javier E

Opinion | A.I. Is Harder Than You Think - The New York Times - 1 views

  • The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.
  • The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules. (A toy generator after this list makes the point concrete.)
  • A genuine, human-level A.I. will need to be able to cope with all of those possible sentences, not just a small fragment of them.
  • ...3 more annotations...
  • No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
  • Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs.
  • That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
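
The combinatorial point above is easy to demonstrate: with a recursive rule, a modest vocabulary already yields an unbounded set of sentences. A toy generator, with vocabulary and rules invented for illustration:

```python
# Generate sentences from a tiny context-free grammar. The recursive
# NP rule ("the N that VP") is what makes the language infinite.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion here
    "VP": [["V", "NP"], ["V"]],
    "N":  [["robot"], ["caller"], ["salon"]],
    "V":  [["booked"], ["answered"]],
}

def generate(symbol: str = "S") -> str:
    if symbol not in GRAMMAR:  # a terminal word: emit it as-is
        return symbol
    return " ".join(generate(s) for s in random.choice(GRAMMAR[symbol]))

print(generate())  # e.g. "the caller that answered booked the salon"
```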
Javier E

Opinion | Grifters Gone Wild - The New York Times - 0 views

  • Silicon Valley has always had “a flimflam element” and a “fake it ’til you make it” ethos, from the early ’80s, when it was selling vaporware (hardware or software that was more of a concept or work in progress than a workable reality).
  • “We’ve been lionizing and revering these young tech entrepreneurs, treating them not just like princes and princesses but like heroes and icons,” Carreyrou says. “Now that there’s a backlash to Silicon Valley, it will be interesting to see if we reconsider this view that just because you made a lot of money doesn’t necessarily mean that you’re a role model for boys and girls.”
  • Jaron Lanier, the scientist and musician known as the father of virtual reality, has a new book out, “Ten Arguments for Deleting Your Social Media Accounts Right Now.” He says that the business plans of Facebook and Google have served to “elevate the role of the con artist to be central in society.”
  • ...5 more annotations...
  • “Anytime people want to contact each other or have an awareness of each other, it can only be when it’s financed by a third party who wants to manipulate us, to change us in some way or affect how we vote or what we buy,” he says. “In the old days, to be in that unusual situation, you had to be in a cult or a volunteer in an experiment in a psychology building or be in an abusive relationship or at a bogus real estate seminar.
  • “We don’t believe in government,” he says. “A lot of people are pissed at media. They don’t like education. People who used to think the F.B.I. was good now think it’s terrible. With all of these institutions the subject of ridicule, there’s nothing — except Skinner boxes and con artists.”
  • “But now you just need to sign onto Facebook to find yourself in a behavior modification loop, which is the con. And this may destroy our civilization and even our species.”
  • As Maria Konnikova wrote in her book, “The Confidence Game,” “The whirlwind advance of technology heralds a new golden age of the grift. Cons thrive in times of transition and fast change” when we are losing the old ways and open to the unexpected.
  • now narcissistic con artists are dominating the main stage, soaring to great heights and spectacularly exploding
Javier E

Resist the Internet - The New York Times - 0 views

  • Definitely if you’re young, increasingly if you’re old, your day-to-day, minute-to-minute existence is dominated by a compulsion to check email and Twitter and Facebook and Instagram with a frequency that bears no relationship to any communicative need.
  • it requires you to focus intensely, furiously, and constantly on the ephemera that fills a tiny little screen, and experience the traditional graces of existence — your spouse and friends and children, the natural world, good food and great art — in a state of perpetual distraction.
  • It certainly delivers some social benefits, some intellectual advantages, and contributes an important share to recent economic growth.
  • ...9 more annotations...
  • They are the masters; we are not. They are built to addict us, as the social psychologist Adam Alter’s new book “Irresistible” points out — and to madden us, distract us, arouse us and deceive us.
  • We primp and perform for them as for a lover; we surrender our privacy to their demands; we wait on tenterhooks for every “like.” The smartphone is in the saddle, and it rides mankind.
  • the internet, like alcohol, may be an example of a technology that should be sensibly restricted in custom and in law.
  • Used within reasonable limits, of course, these devices also offer us new graces. But we are not using them within reasonable limits.
  • there are also excellent reasons to think that online life breeds narcissism, alienation and depression, that it’s an opiate for the lower classes and an insanity-inducing influence on the politically engaged, and that it takes more than it gives from creativity and deep thought. Meanwhile the age of the internet has been, thus far, an era of bubbles, stagnation and democratic decay — hardly a golden age whose customs must be left inviolate.
  • So a digital temperance movement would start by resisting the wiring of everything, and seek to create more spaces in which internet use is illegal, discouraged or taboo. Toughen laws against cellphone use in cars, keep computers out of college lecture halls, put special “phone boxes” in restaurants where patrons would be expected to deposit their devices, confiscate smartphones being used in museums and libraries and cathedrals, create corporate norms that strongly discourage checking email in a meeting.
  • Then there are the starker steps. Get computers — all of them — out of elementary schools, where there is no good evidence that they improve learning. Let kids learn from books for years before they’re asked to go online for research; let them play in the real before they’re enveloped by the virtual
  • The age of consent should be 16, not 13, for Facebook accounts. Kids under 16 shouldn’t be allowed on gaming networks. High school students shouldn’t bring smartphones to school. Kids under 13 shouldn’t have them at all.
  • I suspect that versions of these ideas will be embraced within my lifetime by a segment of the upper class and a certain kind of religious family. But the masses will still be addicted, and the technology itself will have evolved to hook and immerse — and alienate and sedate — more completely and efficiently.
sissij

Why Instagram Is Becoming Facebook's Next Facebook - The New York Times - 1 views

  • Instagram has thus triggered an echo — it feels like Facebook. More precisely, it feels the way Facebook did from 2009 to 2012, when it silently crossed over from one of those tech things that some people sometimes did to one of those tech things that everyone you know does every day.
  • But last year, you might have said there was a question whether a picture-based service like Instagram could have reached similar scale — whether it was universal enough, whether there were enough people whose phones could handle it, whether it could survive greater competition from newer photo networks like Snapchat.
  • Mr. Systrom said this plan to rapidly speed up Instagram’s pace of change to attract more users was deliberate.
  •  
    I think the rise of social media partly reflects changes in our society. By looking at how a social media platform achieves its success, we can see what the mainstream culture of the society is.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) (A toy version of such a scoring rule appears after this list.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
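
The Xerox-style evaluation annotated above, reduced to its skeleton: weight a few features, sum them, and bucket the score into red, yellow, or green. Every feature, weight, and threshold below is invented for illustration; real systems learn these from outcome data. The point is only the shape of the mechanism:

```python
# Toy screening score in the spirit of the color-coded rating
# described in the article.
def screen(personality: float, cognitive: float, social_networks: int) -> str:
    score = 2.0 * personality + 3.0 * cognitive  # both scaled 0..1
    if 1 <= social_networks <= 4:  # "at least one but not more than four"
        score += 1.0
    if score >= 4.5:
        return "green"   # hire away
    if score >= 3.0:
        return "yellow"  # middling
    return "red"         # poor candidate

print(screen(personality=0.8, cognitive=0.9, social_networks=2))  # green
```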
sissij

What 'Snowflakes' Get Right About Free Speech - The New York Times - 0 views

  • “Madame, you are an experience, but not an argument.”
  • it has taken on renewed significance as the struggles on American campuses to negotiate issues of free speech have intensified — most recently in protests at Auburn University against a visit by the white nationalist Richard Spencer.
  • Lanzmann’s blunt reply favored reasoned analysis over personal memory.
  • ...7 more annotations...
  • Freedom of expression became a flash point in this shift.
  • Then as now, both liberals and conservatives were wary of the privileging of personal experience, with its powerful emotional impact, over reason and argument, which some fear will bring an end to civilization, or at least to freedom of speech.
  • “The Postmodern Condition,” an account of how public discourse discards the categories of true/false and just/unjust in favor of valuing the mere fact that something is being communicated, examined the tension between experience and argument in a different way.
  • Lyotard focused on the asymmetry of different positions when personal experience is challenged by abstract arguments.
  • The rights of transgender people for legal equality and protection against discrimination are a current example in a long history of such redefinitions.
  • The idea of freedom of speech does not mean a blanket permission to say anything anybody thinks. It means balancing the inherent value of a given view with the obligation to ensure that other members of a given community can participate in discourse as fully recognized members of that community.
  • which aim to educate students in how to belong to various communities — should not mean that someone’s humanity, or their right to participate in political speech as political agents, can be freely attacked, demeaned or questioned.
  •  
    This article reminds me of the topic we discussed today in TOK class. The modern paradigm doesn't provide us with any positive guidance. It states that god does not exist. Since there are no limits in this paradigm, people can theoretically do whatever they want. I think freedom of speech has the same problem. Sometimes people use freedom of speech as a shield for saying things that hurt others' feelings. Freedom should have some limits. --Sissi (4/24/2017)
anonymous

BBC - Future - The eight-day guide to a better digital life - 0 views

  • It was designed by the non-profit groups Mozilla and the Tactical Technology Collective to coincide with The Glass Room, a pop-up experience in London that invited visitors to look at what happens to their data behind the scenes. They recognise that we can’t transform years of online behaviour – instead, the Detox is about trying to help us make more informed data choices in the future. “In less than half an hour every day, over the course of eight days, people can slim down their ‘data bloat’ with easy, practical steps,” the curators of The Glass Room told me. “We hope that the Data Detox Kit will help people think differently about data collection.”
  • The first day is, essentially, about scaring you into realising how much of you is online via search engines.
  • ...3 more annotations...
  • You can delete the activity that Google stores, which the Detox tells you how to do.
  • For a start, every wi-fi network you connect to sees a list of the other networks you’ve connected to in the past, and most networks are given an easily identifiable name.
  • “It’s a crucial part of making your new digital lifestyle work, and their actions online matter. Every time they tag you, mention you or upload data about you, it adds to your data build-up, no matter how conscientious you’ve been.”
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.” (The first sketch at the end of this list illustrates this failure pattern.)
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology in electrical engineering.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration. [A minimal sketch of the single-bit-flip failure mode appears after this list.]
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the "Inventing on Principle" talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site's programming exercises to work just like Victor's demos. On the left-hand side you'd have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it'd instantly change the picture. "In an environment that is truly responsive," Resig wrote about the approach, "you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation." Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month. [A crude sketch of this instant-feedback loop appears after this list.]
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that's what you spend your time thinking about. It's a way of focusing less on the machine and more on the problem you're trying to get it to solve. [A minimal transition-table sketch in this spirit appears after this list.]
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, "it's a very labor-intensive process." He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You're free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, "correct by construction."
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for "Temporal Logic of Actions," is similar in spirit to model-based design: It's a language for writing down the requirements—TLA+ calls them "specifications"—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program's logic, along with the constraints you need it to satisfy. [A toy exhaustive-checking sketch in this spirit appears after this list.]
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
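
A hypothetical Python sketch of the bit-flip scenario from the Toyota bullets above (nothing here is from the actual Camry firmware; the state values and the fail-safe check are invented for illustration): it shows how a fail-safe that tests only for expected state values can be silently bypassed when one bit of memory is corrupted.

```python
# A toy illustration (not Toyota's code): one flipped bit in a state
# variable can defeat a fail-safe that checks only for expected values.

THROTTLE_CLOSED = 0b00  # hypothetical state word: throttle task idle
THROTTLE_OPEN   = 0b01  # hypothetical state word: throttle commanding power

def failsafe(state: int, pedal_pressed: bool) -> str:
    """Close the throttle unless the driver is actually asking for power."""
    if state == THROTTLE_OPEN and not pedal_pressed:
        return "close throttle"
    return "no action"

# Normal operation: the fail-safe catches an open throttle with no pedal input.
print(failsafe(THROTTLE_OPEN, pedal_pressed=False))   # -> close throttle

# Flip one bit of the state word (a hypothetical memory fault).
corrupted = THROTTLE_OPEN ^ 0b10                      # 0b01 becomes 0b11
print(failsafe(corrupted, pedal_pressed=False))       # -> no action
```

The corrupted value matches no branch the programmer anticipated, so the fail-safe simply falls through. This is the kind of "rare" combination that, as a later bullet notes, human intuition is poor at estimating.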
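
The instant-feedback demos described in the Victor and Khan Academy bullets above rest on one basic mechanism: re-run the code the moment it changes. The sketch below is a crude, stdlib-only imitation (the polling approach and the `watch` function are my own invention, not Victor's or Khan Academy's); real tools such as Light Table do far more, like showing data flowing through live code.

```python
# A crude imitation of the "immediate connection" idea: re-run a script
# every time it is saved, so the output always reflects the current code.

import runpy
import time
from pathlib import Path

def watch(script: str, poll_seconds: float = 0.2) -> None:
    path = Path(script)
    last = 0.0
    while True:
        mtime = path.stat().st_mtime
        if mtime != last:               # the file was saved: re-execute it
            last = mtime
            print(f"--- re-running {script} ---")
            try:
                runpy.run_path(script)  # run the fresh code immediately
            except Exception as exc:    # a typo shouldn't kill the loop
                print(f"error: {exc}")
        time.sleep(poll_seconds)

# watch("sketch.py")  # edit sketch.py and see its output update on every save
```

Polling file timestamps is the bluntest possible design choice, but it is enough to make the edit-to-result gap feel immediate for a small script.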
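
The elevator rules from the model-based-design bullets above can be written as data instead of control flow. A minimal sketch, not Bantegnie's actual tool or its notation (the state and event names here are invented): the "model" is a transition table, and the code that walks it is generic.

```python
# A minimal model-based sketch: the elevator's rules live in a transition
# table (the "model"), not in hand-written control-flow code.

MODEL = {
    "door_open":   {"close_door": "door_closed"},
    "door_closed": {"open_door": "door_open", "start": "moving"},
    "moving":      {"stop": "door_closed"},
}

def step(state: str, event: str) -> str:
    """Apply one event to the model; illegal events are rejected outright."""
    transitions = MODEL[state]
    if event not in transitions:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
    return transitions[event]

s = "door_open"
for e in ["close_door", "start", "stop", "open_door"]:
    s = step(s, e)
    print(e, "->", s)

# step("moving", "open_door") would raise: the model forbids opening mid-ride.
```

Because the rules live in one table, a reviewer can check them the way the article describes checking the whiteboard diagram: just by looking, you can see that the only path to "moving" goes through "door_closed".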
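
The TLA+ bullets above are about exhaustive verification rather than spot testing. TLA+ is its own notation with its own model checker (TLC), so the sketch below only imitates the idea in Python under invented elevator rules: enumerate every state the design can reach and assert the safety property in all of them.

```python
# In the spirit of TLA+/TLC, not TLA+ itself: exhaustively enumerate every
# reachable state of a toy elevator design and check a safety invariant.

from collections import deque

def next_states(state):
    """All states reachable in one step. State = (door, motion)."""
    door, motion = state
    succ = []
    if motion == "stopped":
        # the door may open or close only while the car is stopped
        succ.append(("open" if door == "closed" else "closed", motion))
    if door == "closed":
        # the car may start or stop only with the door closed
        succ.append((door, "moving" if motion == "stopped" else "stopped"))
    return succ

def invariant(state):
    # Safety property: never moving with the door open.
    return state != ("open", "moving")

def check(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:                      # breadth-first sweep of the state space
        state = frontier.popleft()
        assert invariant(state), f"invariant violated: {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"invariant holds in all {len(seen)} reachable states")

check(("open", "stopped"))   # -> invariant holds in all 3 reachable states
```

Unlike a test suite, which samples a few runs, the sweep visits every reachable state, so the claim is exhaustive for this model; that is the kind of guarantee Newcombe wanted for algorithms too large to check by intuition.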
Javier E

There's No Such Thing As 'Sound Science' | FiveThirtyEight - 1 views

  • Science is being turned against itself. For decades, its twin ideals of transparency and rigor have been weaponized by those who disagree with results produced by the scientific method. Under the Trump administration, that fight has ramped up again.
  • The same entreaties crop up again and again: We need to root out conflicts. We need more precise evidence. What makes these arguments so powerful is that they sound quite similar to the points raised by proponents of a very different call for change that’s coming from within science.
  • Despite having dissimilar goals, the two forces espouse principles that look surprisingly alike: Science needs to be transparent. Results and methods should be openly shared so that outside researchers can independently reproduce and validate them. The methods used to collect and analyze data should be rigorous and clear, and conclusions must be supported by evidence.
  • ...26 more annotations...
  • they’re also used as talking points by politicians who are working to make it more difficult for the EPA and other federal agencies to use science in their regulatory decision-making, under the guise of basing policy on “sound science.” Science’s virtues are being wielded against it.
  • What distinguishes the two calls for transparency is intent: Whereas the “open science” movement aims to make science more reliable, reproducible and robust, proponents of “sound science” have historically worked to amplify uncertainty, create doubt and undermine scientific discoveries that threaten their interests.
  • “Our criticisms are founded in a confidence in science,” said Steven Goodman, co-director of the Meta-Research Innovation Center at Stanford and a proponent of open science. “That’s a fundamental difference — we’re critiquing science to make it better. Others are critiquing it to devalue the approach itself.”
  • Calls to base public policy on "sound science" seem unassailable if you don't know the term's history. The phrase was adopted by the tobacco industry in the 1990s to counteract mounting evidence linking secondhand smoke to cancer.
  • The sound science tactic exploits a fundamental feature of the scientific process: Science does not produce absolute certainty. Contrary to how it’s sometimes represented to the public, science is not a magic wand that turns everything it touches to truth. Instead, it’s a process of uncertainty reduction, much like a game of 20 Questions.
  • Any given study can rarely answer more than one question at a time, and each study usually raises a bunch of new questions in the process of answering old ones. “Science is a process rather than an answer,” said psychologist Alison Ledgerwood of the University of California, Davis. Every answer is provisional and subject to change in the face of new evidence. It’s not entirely correct to say that “this study proves this fact,” Ledgerwood said. “We should be talking instead about how science increases or decreases our confidence in something.”
  • While insisting that they merely wanted to ensure that public policy was based on sound science, tobacco companies defined the term in a way that ensured that no science could ever be sound enough. The only sound science was certain science, which is an impossible standard to achieve.
  • “Doubt is our product,” wrote one employee of the Brown & Williamson tobacco company in a 1969 internal memo. The note went on to say that doubt “is the best means of competing with the ‘body of fact’” and “establishing a controversy.” These strategies for undermining inconvenient science were so effective that they’ve served as a sort of playbook for industry interests ever since
  • Doubt merchants aren’t pushing for knowledge, they’re practicing what Proctor has dubbed “agnogenesis” — the intentional manufacture of ignorance. This ignorance isn’t simply the absence of knowing something; it’s a lack of comprehension deliberately created by agents who don’t want you to know,
  • In the hands of doubt-makers, transparency becomes a rhetorical move. "It's really difficult as a scientist or policy maker to make a stand against transparency and openness, because well, who would be against it?"
  • But at the same time, “you can couch everything in the language of transparency and it becomes a powerful weapon.” For instance, when the EPA was preparing to set new limits on particulate pollution in the 1990s, industry groups pushed back against the research and demanded access to primary data (including records that researchers had promised participants would remain confidential) and a reanalysis of the evidence. Their calls succeeded and a new analysis was performed. The reanalysis essentially confirmed the original conclusions, but the process of conducting it delayed the implementation of regulations and cost researchers time and money.
  • Delay is a time-tested strategy. “Gridlock is the greatest friend a global warming skeptic has,” said Marc Morano, a prominent critic of global warming research
  • which has received funding from the oil and gas industry. “We’re the negative force. We’re just trying to stop stuff.”
  • these ploys are getting a fresh boost from Congress. The Data Quality Act (also known as the Information Quality Act) was reportedly written by an industry lobbyist and quietly passed as part of an appropriations bill in 2000. The rule mandates that federal agencies ensure the “quality, objectivity, utility, and integrity of information” that they disseminate, though it does little to define what these terms mean. The law also provides a mechanism for citizens and groups to challenge information that they deem inaccurate, including science that they disagree with. “It was passed in this very quiet way with no explicit debate about it — that should tell you a lot about the real goals,” Levy said.
  • in the 20 months following its implementation, the act was repeatedly used by industry groups to push back against proposed regulations and bog down the decision-making process. Instead of deploying transparency as a fundamental principle that applies to all science, these interests have used transparency as a weapon to attack very particular findings that they would like to eradicate.
  • Now Congress is considering another way to legislate how science is used. The Honest Act, a bill sponsored by Rep. Lamar Smith of Texas, is another example of what Levy calls a "Trojan horse" law that uses the language of transparency as a cover to achieve other political goals. (The bill has been passed by the House but still awaits a vote in the Senate.) Smith's legislation would severely limit the kind of evidence the EPA could use for decision-making. Only studies whose raw data and computer codes were publicly available would be allowed for consideration.
  • It might seem like an easy task to sort good science from bad, but in reality it’s not so simple. “There’s a misplaced idea that we can definitively distinguish the good from the not-good science, but it’s all a matter of degree,” said Brian Nosek, executive director of the Center for Open Science. “There is no perfect study.” Requiring regulators to wait until they have (nonexistent) perfect evidence is essentially “a way of saying, ‘We don’t want to use evidence for our decision-making,’
  • Most scientific controversies aren't about science at all, and once the sides are drawn, more data is unlikely to bring opponents into agreement.
  • objective knowledge is not enough to resolve environmental controversies. “While these controversies may appear on the surface to rest on disputed questions of fact, beneath often reside differing positions of value; values that can give shape to differing understandings of what ‘the facts’ are.” What’s needed in these cases isn’t more or better science, but mechanisms to bring those hidden values to the forefront of the discussion so that they can be debated transparently. “As long as we continue down this unabashedly naive road about what science is, and what it is capable of doing, we will continue to fail to reach any sort of meaningful consensus on these matters,”
  • The dispute over tobacco was never about the science of cigarettes’ link to cancer. It was about whether companies have the right to sell dangerous products and, if so, what obligations they have to the consumers who purchased them.
  • Similarly, the debate over climate change isn’t about whether our planet is heating, but about how much responsibility each country and person bears for stopping it
  • While researching her book “Merchants of Doubt,” science historian Naomi Oreskes found that some of the same people who were defending the tobacco industry as scientific experts were also receiving industry money to deny the role of human activity in global warming. What these issues had in common, she realized, was that they all involved the need for government action. “None of this is about the science. All of this is a political debate about the role of government,”
  • These controversies are really about values, not scientific facts, and acknowledging that would allow us to have more truthful and productive debates. What would that look like in practice? Instead of cherry-picking evidence to support a particular view (and insisting that the science points to a desired action), the various sides could lay out the values they are using to assess the evidence.
  • For instance, in Europe, many decisions are guided by the precautionary principle — a system that values caution in the face of uncertainty and says that when the risks are unclear, it should be up to industries to show that their products and processes are not harmful, rather than requiring the government to prove that they are harmful before they can be regulated. By contrast, U.S. agencies tend to wait for strong evidence of harm before issuing regulations
  • the difference between them comes down to priorities: Is it better to exercise caution at the risk of burdening companies and perhaps the economy, or is it more important to avoid potential economic downsides even if it means that sometimes a harmful product or industrial process goes unregulated?
  • But science can’t tell us how risky is too risky to allow products like cigarettes or potentially harmful pesticides to be sold — those are value judgements that only humans can make.
Javier E

If Russia can create fake 'Black Lives Matter' accounts, who will next? - The Washingto... - 2 views

  • As in the past, the Russian advertisements did not create ethnic strife or political divisions, either in the United States or in Europe. Instead, they used divisive language and emotive messages to exacerbate existing divisions.
  • The real problem is far broader than Russia: Who will use these methods next — and how?
  • I can imagine multiple groups, many of them proudly American, who might well want to manipulate a range of fake accounts during a riot or disaster to increase anxiety or fear.
  • ...3 more annotations...
  • There is no big barrier to entry in this game: It doesn’t cost much, it doesn’t take much time, it isn’t particularly high-tech, and it requires no special equipment.
  • Facebook, Google and Twitter, not Russia, have provided the technology to create fake accounts and false advertisements, as well as the technology to direct them at particular parts of the population.
  • There is no reason existing laws on transparency in political advertising, on truth in advertising or indeed on libel should not apply to social media as well as traditional media. There is a better case than ever against anonymity, at least against anonymity in the public forums of social media and comment sections, as well as for the elimination of social-media bots.
Javier E

The Epidemic of Facelessness - NYTimes.com - 1 views

  • The fact that the case ended up in court is rare; the viciousness it represents is not. Everyone in the digital space is, at one point or another, exposed to online monstrosity, one of the consequences of the uniquely contemporary condition of facelessness.
  • There is a vast dissonance between virtual communication and an actual police officer at the door. It is a dissonance we are all running up against more and more, the dissonance between the world of faces and the world without faces. And the world without faces is coming to dominate.
  • Inability to see a face is, in the most direct way, inability to recognize shared humanity with another. In a metastudy of antisocial populations, the inability to sense the emotions on other people’s faces was a key correlation. There is “a consistent, robust link between antisocial behavior and impaired recognition of fearful facial affect. Relative to comparison groups, antisocial populations showed significant impairments in recognizing fearful, sad and surprised expressions.”
  • ...16 more annotations...
  • the faceless communication social media creates, the linked distances between people, both provokes and mitigates the inherent capacity for monstrosity.
  • The Gyges effect, the well-noted disinhibition created by communications over the distances of the Internet, in which all speech and image are muted and at arm’s reach, produces an inevitable reaction — the desire for impact at any cost, the desire to reach through the screen, to make somebody feel something, anything. A simple comment can so easily be ignored. Rape threat? Not so much. Or, as Mr. Nunn so succinctly put it on Twitter: “If you can’t threaten to rape a celebrity, what is the point in having them?”
  • The challenge of our moment is that the face has been at the root of justice and ethics for 2,000 years.
  • The precondition of any trial, of any attempt to reconcile competing claims, is that the victim and the accused look each other in the face.
  • For the great French-Jewish philosopher Emmanuel Levinas, the encounter with another’s face was the origin of identity — the reality of the other preceding the formation of the self. The face is the substance, not just the reflection, of the infinity of another person. And from the infinity of the face comes the sense of inevitable obligation, the possibility of discourse, the origin of the ethical impulse.
  • “Through imitation and mimicry, we are able to feel what other people feel. By being able to feel what other people feel, we are also able to respond compassionately to other people’s emotional states.” The face is the key to the sense of intersubjectivity, linking mimicry and empathy through mirror neurons — the brain mechanism that creates imitation even in nonhuman primates.
  • it’s also no mere technical error on the part of Twitter; faceless rage is inherent to its technology.
  • Without a face, the self can form only with the rejection of all otherness, with a generalized, all-purpose contempt — a contempt that is so vacuous because it is so vague, and so ferocious because it is so vacuous. A world stripped of faces is a world stripped, not merely of ethics, but of the biological and cultural foundations of ethics.
  • The spirit of facelessness is coming to define the 21st. Facelessness is not a trend; it is a social phase we are entering that we have not yet figured out how to navigate.
  • the flight back to the face takes on new urgency. Google recently reported that on Android alone, which has more than a billion active users, people take 93 million selfies a day
  • Emojis are an explicit attempt to replicate the emotional context that facial expression provides. Intriguingly, emojis express emotion, often negative emotions, but you cannot troll with them.
  • But all these attempts to provide a digital face run counter to the main current of our era’s essential facelessness. The volume of digital threats appears to be too large for police forces to adequately deal with.
  • The more established wisdom about trolls, at this point, is to disengage. Obviously, in many cases, actual crimes are being committed, crimes that demand confrontation, by victims and by law enforcement officials, but in everyday digital life engaging with the trolls “is like trying to drown a vampire with your own blood,”
  • There is a third way, distinct from confrontation or avoidance: compassion
  • we need a new art of conversation for the new conversations we are having — and the first rule of that art must be to remember that we are talking to human beings: “Never say anything online that you wouldn’t say to somebody’s face.” But also: “Don’t listen to what people wouldn’t say to your face.”
  • The neurological research demonstrates that empathy, far from being an artificial construct of civilization, is integral to our biology.