TOK Friends / Group items tagged: dice

knudsenlu

The History of Dice Reflects Beliefs in Fate and Chance - The Atlantic - 0 views

  • Dice, in their standard six-sided form, seem like the simplest kind of device—almost a classic embodiment of chance. But a new study of more than 100 examples from the last 2,000 years or so unearthed in the Netherlands shows that they have not always looked exactly the way they do now. What’s more, the shifts in dice’s appearance may reflect people’s changing sense of what exactly is behind a roll—fate, or probability.
  • Did it matter to game players that these dice were not fair? “We don’t know for sure,” Eerkens says. However, the way Romans wrote about dice throws suggests the results were regarded as signs of supernatural favor or of a player’s fortune. The archaeologist Ellen Swift, in her book Roman Artifacts and Society, writes that high rolls had associations of benevolence and felicity, and that rolling three sixes at the same time seems to have been called a Venus. “Dice potentially played an important role in conceptualizing divine action in the world,” she writes.
  • At the same time, dice tend to get smaller, perhaps to make them easier to hide, as gambling was not favored by the increasingly powerful religious authorities. Some of the study’s dice from this era, in fact, were found all together in a small hole in a garbage heap, including one cheater’s die with an extra three. Could it be someone saw the light and forswore dice?
  • In the 17th century, even Galileo wrote about why, in a game with three dice, the total 10 should come up more often than the total 9. It’s an effect one would only be able to detect after thousands of rolls, but the reason behind it involves how many combinations of numbers can add up to each total (see the sketch after this list).
  • All these changes in dice come about, says Eerkens, “as different astronomers are coming up with new ideas about the world, and mathematicians are starting to understand numbers and probability.” Which came first: Did people begin to intuitively understand what true chance felt like and adjust dice accordingly, or did the understanding trickle out from what would eventually become known as the scientific community?
  • And it does make you wonder: Would you know that fair dice should fall equally often on each side, or that in a three-dice game it’s better to bet on 10 than 9, if someone hadn’t told you?
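
Galileo’s observation is easy to check by brute force. Here is a minimal Python sketch (an editor’s illustration, not from the article) that enumerates all 6³ = 216 equally likely ordered rolls of three dice and counts how many sum to 9 versus 10:

```python
from itertools import product
from collections import Counter

# Tally the totals of all 216 equally likely ordered rolls of three dice.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

for total in (9, 10):
    print(f"P(sum = {total}) = {counts[total]}/216 = {counts[total] / 216:.4f}")
```

It prints 25/216 ≈ 0.116 for a total of 9 and 27/216 = 0.125 for a total of 10: both totals arise from six unordered combinations, but the combinations that make 10 admit more orderings. A gap of less than one percentage point is exactly the kind of effect only thousands of rolls would reveal.
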
johnsonle1

The Universe Is as Spooky as Einstein Thought - The Atlantic - 0 views

  • According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another, at least until they are measured, when they seem to suddenly roll the dice and jump into formation.
Emilio Ergueta

Lessons from Gaming #2: Random Universe | Talking Philosophy - 0 views

  • My experiences as a tabletop and video gamer have taught me numerous lessons that are applicable to the real world (assuming there is such a thing). One key skill in getting about in reality is the ability to model reality.
  • Many games, such as Call of Cthulhu, D&D, Pathfinder and Star Fleet Battles make extensive use of dice to model the vagaries of reality.
  • Being a gamer, it is natural for me to look at reality as also being random—after all, if a random model (gaming system) nicely fits aspects of reality, then that suggests the model has things right. As such, I tend to think of this as being a random universe in which God (or whatever) plays dice with us.
  • I do not know if the universe is random (contains elements of chance). After all, we tend to attribute chance to the unpredictable, but this unpredictability might be a matter of ignorance rather than chance.
  • even if things could have been different, it does not follow that chance is real. After all, chance is not the only thing that could make a difference.
  • Obviously, there is no way to prove that choice occurs—as with chance versus determinism, without simply knowing the brute fact about choice there is no way to know whether the universe allows for choice or not.
  • because of chance, the results of any choice cannot be known with certainty
  • if things can fail or go wrong because of chance, then it makes sense to be more forgiving and understanding of failure—at least when the failure can be attributed in part to chance.
  • the role of chance in success and failure should be considered when planning and creating policies.
Javier E

Assessing Kurzweil: the results - Less Wrong - 0 views

  • when talking about unprecedented future events such as nanotechnology or AI, the choice of the model is also dependent on expert judgement.
  • In various books, he's made predictions about what would happen in 2009, and we're now in a position to judge their accuracy. I haven't been satisfied by the various accuracy ratings I've found online, so I decided to do my own assessments.
  • Ray Kurzweil has a model of technological intelligence development where, broadly speaking, evolution, pre-computer technological development, post-computer technological development and future AIs all fit into the same exponential increase.
  • relying on a single assessor is unreliable, especially when some of the judgements are subjective. So I started a call for volunteers to get assessors. Meanwhile Malo Bourgon set up a separate assessment on Youtopia, harnessing the awesome power of altruists chasing after points. The results are now in, and they are fascinating. They are...
kushnerha

The Psychology of Risk Perception Explains Why People Don't Fret the Pacific Northwest'... - 0 views

  • what psychology teaches us. Turns out most of us just aren’t that good at calculating risk, especially when it comes to huge natural events like earthquakes. That also means we’re not very good at mitigating those kinds of risks. Why? And is it possible to get around our short-sightedness, so that this time, we’re actually prepared? Risk perception is a vast, complex field of research. Here are just some of the core findings.
  • Studies show that when people calculate risk, especially when the stakes are high, we rely much more on feeling than fact. And we have trouble connecting emotionally to something scary if the odds of it happening today or tomorrow aren’t particularly high. So, if an earthquake, flood, tornado or hurricane isn’t immediately imminent, people are unlikely to act. “Perceiving risk is all about how scary or not do the facts feel.”
  • This feeling also relates to how we perceive natural, as opposed to human-made, threats. We tend to be more tolerant of nature than of other people who would knowingly impose risks upon us—terrorists being the clearest example. “We think that nature is out of our control—it’s not malicious, it’s not profiting from us, we just have to bear with it,”
  • And in many cases, though not all, people living in areas threatened by severe natural hazards do so by choice. If a risk has not been imposed on us, we take it much less seriously. Though Schulz’s piece certainly made a splash online, it is hard to imagine a mass exodus of Portlanders and Seattleites in response. Hey, they like it there.
  • They don’t have much to compare the future earthquake to. After all, there hasn’t been an earthquake or tsunami like it there since roughly 1700. Schulz poeticizes this problem, calling out humans for their “ignorance of or an indifference to those planetary gears which turn more slowly than our own.” Once again, this confounds our emotional connection to the risk.
  • But our “temporal parochialism,” as Schulz calls it, also undoes our grasp on probability. “We think probability happens with some sort of regularity or pattern,” says Ropeik. “If an earthquake is projected to hit within 50 years, when there hasn’t been one for centuries, we don’t think it’s going to happen.” Illogical thinking works in reverse, too: “If a minor earthquake just happened in Seattle, we think we’re safe.”
  • The belief that an unlikely event won’t happen again for a while is called the gambler’s fallacy. Probability doesn’t work like that: the odds are the same with every roll of the dice (see the sketch after this list).
  • For individuals and government alike, addressing every point of concern requires a cost-benefit analysis. When kids barely have pencils and paper in schools that already exist, how much is appropriate to invest in earthquake preparedness? Even when that earthquake will kill thousands, displace millions, and cripple a region’s economy for decades to come—as Cascadia is projected to—the answer is complicated. “You immediately run into competing issues,” says Slovic. “When you’re putting resources into earthquake protection that has to be taken away from current social needs—that is a very difficult sell.”
  • There are things people can do to combat our innate irrationality. The first is obvious: education. California has a seismic safety commission whose job is to publicize the risks of earthquakes and advocate for preparedness at household and state policy levels.
  • Another idea is similar to food safety ratings in the windows of some cities’ restaurants. Schulz reports that some 75 percent of Oregon’s structures aren’t designed to hold up to a really big Cascadia quake. “These buildings could have their risk and safety score publicly posted,” says Slovic. “That would motivate people to retrofit or mitigate those risks, particularly if they are schools.”
  • science points to a hard truth. Humans are simply inclined to be more concerned about what’s immediately in front of us: Snakes, fast-moving cars, unfamiliar chemical compounds in our breakfast cereal and the like will always elicit a quicker response than an abstract, far-off hazard.
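
To make the independence point behind the gambler’s fallacy concrete, here is a small Python simulation (an editor’s illustration, not from the article): it estimates the probability of rolling a six both unconditionally and immediately after a run of three non-sixes.

```python
import random

random.seed(42)
rolls = [random.randint(1, 6) for _ in range(1_000_000)]

# Unconditional frequency of rolling a six.
p_six = rolls.count(6) / len(rolls)

# Frequency of a six immediately after three consecutive non-sixes.
after_streak = [rolls[i] for i in range(3, len(rolls))
                if all(r != 6 for r in rolls[i - 3:i])]
p_six_after = after_streak.count(6) / len(after_streak)

print(f"P(six)                     ~ {p_six:.4f}")
print(f"P(six | 3 non-sixes prior) ~ {p_six_after:.4f}")
```

Both estimates come out near 1/6 ≈ 0.1667: the die has no memory, which is exactly the independence the bullet above describes and the gambler’s fallacy denies.
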
Javier E

Here's what the government's dietary guidelines should really say - The Washington Post - 0 views

  • If I were writing the dietary guidelines, I would give them a radical overhaul. I’d go so far as to radically overhaul the way we evaluate diet. Here’s why and how.
  • Lately, as scientists try, and fail, to reproduce results, all of science is taking a hard look at funding biases, statistical shenanigans and groupthink. All that criticism, and then some, applies to nutrition.
  • Prominent in the charge to change the way we do science is John Ioannidis, professor of health research and policy at Stanford University. In 2005, he published “Why Most Published Research Findings Are False” in the journal PLOS Medicine.
  • He came down hard on nutrition in a pull-no-punches 2013 British Medical Journal editorial titled, “Implausible results in human nutrition research,” in which he noted, “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome.”
  • Ioannidis told me that sussing out the connection between diet and health — nutritional epidemiology — is enormously challenging, and “the tools that we’re throwing at the problem are not commensurate with the complexity and difficulty of the problem.” The biggest of those tools is observational research, in which we collect data on what people eat, and track what happens to them.
  • He lists plant-based foods — fruit, veg, whole grains, legumes — but acknowledges that we don’t understand enough to prescribe specific combinations or numbers of servings.
  • funding bias isn’t the only kind. “Fanatical opinions abound in nutrition,” Ioannidis wrote in 2013, and those have bias power too.
  • “Definitive solutions won’t come from another million observational papers or small randomized trials,” reads the subtitle of Ioannidis’s paper. His is a burn-down-the-house ethos.
  • When it comes to actual dietary recommendations, the disagreement is stark. “Ioannidis and others say we have no clue, the science is so bad that we don’t know anything,” Hu told me. “I think that’s completely bogus. We know a lot about the basic elements of a healthy diet.”
  • Give tens of thousands of people that FFQ (food-frequency questionnaire), and you end up with a ginormous repository of possible correlations. You can zero in on a vitamin, macronutrient or food, and go to town. But not only are you starting with flawed data, you’ve got a zillion possible confounding variables — dietary, demographic, socioeconomic. I’ve heard statisticians call it “noise mining,” and Ioannidis is equally skeptical. “With this type of data, you can get any result you want,” he said. “You can align it to your beliefs.” (A toy demonstration of noise mining follows this list.)
  • Big differences in what people eat track with other differences. Heavy plant-eaters are different from, say, heavy meat-eaters in all kinds of ways (income, education, physical activity, BMI). Red meat consumption correlates with increased risk of dying in an accident as much as dying from heart disease. The amount of faith we put in observational studies is a judgment call.
  • I find myself in Ioannidis’s camp. What have we learned, unequivocally enough to build a consensus in the nutrition community, about how diet affects health? Well, trans-fats are bad.
  • Over and over, large population studies get sliced and diced, and it’s all but impossible to figure out what’s signal and what’s noise. Researchers try to do that with controlled trials to test the connections, but those have issues too. They’re expensive, so they’re usually small and short-term. People have trouble sticking to the diet being studied. And scientists are generally looking for what they call “surrogate endpoints,” like increased cholesterol rather than death from heart disease, since it’s impractical to keep a trial going until people die.
  • So, what do we do? Hu and Ioannidis actually have similar suggestions. For starters, they both think we should be looking at dietary patterns rather than single foods or nutrients. They also both want to look across the data sets. Ioannidis emphasizes transparency. He wants to open data to the world and analyze all the data sets in the same way to see if “any signals survive.” Hu is more cautious (partly to safeguard confidentiality).
  • I have a suggestion. Let’s give up on evidence-based eating. It’s given us nothing but trouble and strife. Our tools can’t find any but the most obvious links between food and health, and we’ve found those already.
  • Instead, let’s acknowledge the uncertainty and eat to hedge against what we don’t know
  • We’ve got two excellent hedges: variety and foods with nutrients intact (which describes such diets as the Mediterranean, touted by researchers). If you severely limit your foods (vegan, keto), you might miss out on something. Ditto if you eat foods with little nutritional value (sugar, refined grains). Oh, and pay attention to the two things we can say with certainty: Keep your weight down, and exercise.
  • I used to say I could tell you everything important about diet in 60 seconds. Over the years, my spiel got shorter and shorter as truisms fell by the wayside, and my confidence waned in a field where we know less, rather than more, over time. I’m down to five seconds now: Eat a wide variety of foods with their nutrients intact, keep your weight down and get some exercise.
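
Ioannidis’s “noise mining” worry is easy to reproduce. In this minimal Python sketch (an editor’s illustration; the sample sizes are made up for the demo), we generate a purely random “outcome” and 200 purely random “nutrient intakes,” then test every correlation at p < 0.05:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_people, n_nutrients = 1000, 200

# Pure noise: by construction, no nutrient is related to the outcome.
nutrients = rng.normal(size=(n_people, n_nutrients))
outcome = rng.normal(size=n_people)

hits = [j for j in range(n_nutrients)
        if pearsonr(nutrients[:, j], outcome)[1] < 0.05]
print(f"{len(hits)} of {n_nutrients} random 'nutrients' correlate "
      f"with the outcome at p < 0.05")
```

Roughly 5 percent of the tests (about 10 of the 200) come back “significant” from noise alone, which is why a big repository of FFQ answers can be made to associate almost any nutrient with almost any outcome.
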
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much wider-ranging than what we had before.
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges. (A toy sketch of this kind of multi-signal scoring follows this list.)
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history.
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
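
For a sense of what “scoring” a coder from many weak signals might look like, here is a toy Python sketch. Everything in it is invented for illustration: the signal names echo the article’s description, but the weights and the linear combination are editorial assumptions, not Gild’s proprietary model.

```python
from dataclasses import dataclass

@dataclass
class CoderSignals:
    # Hypothetical signals, each normalized to 0-1, of the kind the
    # article describes Gild extracting from public activity.
    code_simplicity: float       # from open-source code analysis
    documentation: float         # from open-source code analysis
    adoption_by_others: float    # how often others reuse the code
    qa_answer_popularity: float  # e.g. Stack Overflow reception

# Made-up weights purely for illustration; a real model would be
# fit against outcome data, not hand-chosen.
WEIGHTS = {
    "code_simplicity": 0.3,
    "documentation": 0.2,
    "adoption_by_others": 0.3,
    "qa_answer_popularity": 0.2,
}

def score(s: CoderSignals) -> float:
    """Collapse many weak signals into a single 0-1 ranking score."""
    return sum(getattr(s, name) * w for name, w in WEIGHTS.items())

print(score(CoderSignals(0.8, 0.6, 0.9, 0.7)))  # -> 0.77
```

The interesting part of the real system isn’t the arithmetic, which is trivial, but where the weights come from: correlating signals like these against programmers already known to be good, as the article describes.
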
Javier E

Silicon Valley Is Not Your Friend - The New York Times - 0 views

  • By all accounts, these programmers turned entrepreneurs believed their lofty words and were at first indifferent to getting rich from their ideas. A 1998 paper by Sergey Brin and Larry Page, then computer-science graduate students at Stanford, stressed the social benefits of their new search engine, Google, which would be open to the scrutiny of other researchers and wouldn’t be advertising-driven.
  • The Google prototype was still ad-free, but what about the others, which took ads? Mr. Brin and Mr. Page had their doubts: “We expect that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”
  • He was concerned about them as young students lacking perspective about life and was worried that these troubled souls could be our new leaders. Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. “No playwright, no stage director, no emperor, however powerful,” Mr. Weizenbaum wrote, “has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.”
  • In his epic anti-A.I. work from the mid-1970s, “Computer Power and Human Reason,” Mr. Weizenbaum described the scene at computer labs. “Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be as riveted as a gambler’s on the rolling dice,” he wrote. “They exist, at least when so engaged, only through and for the computers. These are computer bums, compulsive programmers.”
  • Welcome to Silicon Valley, 2017.
  • As Mr. Weizenbaum feared, the current tech leaders have discovered that people trust computers and have licked their lips at the possibilities. The examples of Silicon Valley manipulation are too legion to list: push notifications, surge pricing, recommended friends, suggested films, people who bought this also bought that.
  • Growth becomes the overriding motivation — something treasured for its own sake, not for anything it brings to the world
  • Facebook and Google can point to a greater utility that comes from being the central repository of all people, all information, but such market dominance has obvious drawbacks, and not just the lack of competition. As we’ve seen, the extreme concentration of wealth and power is a threat to our democracy by making some people and companies unaccountable.
  • As is becoming obvious, these companies do not deserve the benefit of the doubt. We need greater regulation, even if it impedes the introduction of new services.
  • We need to break up these online monopolies because if a few people make the decisions about how we communicate, shop, learn the news, again, do we control our own society?
Javier E

Nobel Prize in Physics Is Awarded to 3 Scientists for Work Exploring Quantum Weirdness ... - 0 views

  • “We’re used to thinking that information about an object — say that a glass is half full — is somehow contained within the object.” Instead, he says, entanglement means objects “only exist in relation to other objects, and moreover these relationships are encoded in a wave function that stands outside the tangible physical universe.”
  • Einstein, though one of the founders of quantum theory, rejected it, saying famously that God did not play dice with the universe. In a 1935 paper written with Boris Podolsky and Nathan Rosen, he tried to expose quantum mechanics as an incomplete theory by pointing out that, by quantum rules, measuring a particle in one place could instantly affect measurements of the other particle, even if it was millions of miles away.
  • Dr. Clauser, who has a knack for electronics and experimentation and misgivings about quantum theory, was the first to perform Bell’s proposed experiment. He happened upon Dr. Bell’s paper while a graduate student at Columbia University and recognized it as something he could do.
  • In 1972, using duct tape and spare parts in the basement on the campus of the University of California, Berkeley, Dr. Clauser and a graduate student, Stuart Freedman, who died in 2012, endeavored to perform Bell’s experiment to measure quantum entanglement. In a series of experiments, they fired thousands of light particles, or photons, in opposite directions to measure a property known as polarization, which could have only two values — up or down. The result for each detector was always a series of seemingly random ups and downs. But when the two detectors’ results were compared, the ups and downs matched in ways that neither “classical physics” nor Einstein’s laws could explain (a back-of-the-envelope version of the arithmetic follows this list). Something weird was afoot in the universe. Entanglement seemed to be real.
  • in 2002, Dr. Clauser admitted that he himself had expected quantum mechanics to be wrong and Einstein to be right. “Obviously, we got the ‘wrong’ result. I had no choice but to report what we saw, you know, ‘Here’s the result.’ But it contradicts what I believed in my gut has to be true.” He added, “I hoped we would overthrow quantum mechanics. Everyone else thought, ‘John, you’re totally nuts.’”
  • the correlations only showed up after the measurements of the individual particles, when the physicists compared their results after the fact. Entanglement seemed real, but it could not be used to communicate information faster than the speed of light.
  • In 1982, Dr. Aspect and his team at the University of Paris tried to outfox Dr. Clauser’s loophole by switching the direction along which the photons’ polarizations were measured every 10 nanoseconds, while the photons were already in the air and too fast for them to communicate with each other. He too, was expecting Einstein to be right.
  • Quantum predictions held true, but there were still more possible loopholes in the Bell experiment that Dr. Clauser had identified
  • For example, the polarization directions in Dr. Aspect’s experiment had been changed in a regular and thus theoretically predictable fashion that could be sensed by the photons or detectors.
  • Anton Zeilinger added even more randomness to the Bell experiment, using random number generators to change the direction of the polarization measurements while the entangled particles were in flight.
  • Once again, quantum mechanics beat Einstein by an overwhelming margin, closing the “locality” loophole.
  • as scientists have done more experiments with entangled particles, entanglement is accepted as one of the main features of quantum mechanics and is being put to work in cryptology, quantum computing and an upcoming “quantum internet.”
  • One of its first successes in cryptology is sending messages with entangled pairs, which can deliver cryptographic keys in a secure manner — any eavesdropping will destroy the entanglement, alerting the receiver that something is wrong.
  • With quantum mechanics, just because we can use it doesn’t mean our ape brains understand it. The pioneering quantum physicist Niels Bohr once said that anyone who didn’t think quantum mechanics was outrageous hadn’t understood what was being said.
  • In his interview with A.I.P., Dr. Clauser said, “I confess even to this day that I still don’t understand quantum mechanics, and I’m not even sure I really know how to use it all that well. And a lot of this has to do with the fact that I still don’t understand it.”
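
For readers who want the “overwhelming margin” as a number, here is a minimal Python sketch (an editor’s illustration, using the textbook CHSH angles rather than the settings of any particular experiment). Quantum mechanics predicts that entangled photons measured at polarizer angles a and b show the correlation E(a, b) = cos 2(a − b); combining four such correlations the CHSH way yields 2√2 ≈ 2.83, while any local “classical” account is bounded by 2.

```python
import math

def E(a_deg, b_deg):
    # Quantum prediction for the polarization correlation of
    # entangled photons measured at analyzer angles a and b.
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Textbook CHSH analyzer settings, in degrees.
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"quantum CHSH value: S = {S:.3f}")  # ~ 2.828
print("local-realist bound: |S| <= 2")
```

The experiments described above amount to measuring S with real photons and finding it well above 2, which is what rules out Einstein’s local picture, up to the loopholes each generation of experiments then closed.
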