
TOK Friends: Group items matching "economist" in title, tags, annotations or url

Javier E

Uber's Business Model Could Change Your Work - NYTimes.com - 0 views

  • Just as Uber is doing for taxis, new technologies have the potential to chop up a broad array of traditional jobs into discrete tasks that can be assigned to people just when they’re needed, with wages set by a dynamic measurement of supply and demand, and every worker’s performance constantly tracked, reviewed and subject to the sometimes harsh light of customer satisfaction.
  • Uber and its ride-sharing competitors, including Lyft and Sidecar, are the boldest examples of this breed, which many in the tech industry see as a new kind of start-up — one whose primary mission is to efficiently allocate human beings and their possessions, rather than information.
  • “I do think we are defining a new category of work that isn’t full-time employment but is not running your own business either,”
  • Various companies are now trying to emulate Uber’s business model in other fields, from daily chores like grocery shopping and laundry to more upmarket products like legal services and even medicine.
  • “We may end up with a future in which a fraction of the work force would do a portfolio of things to generate an income — you could be an Uber driver, an Instacart shopper, an Airbnb host and a Taskrabbit,”
  • But the rise of such work could also make your income less predictable and your long-term employment less secure. And it may relegate the idea of establishing a lifelong career to a distant memory.
  • “This on-demand economy means a work life that is unpredictable, doesn’t pay very well and is terribly insecure.” After interviewing many workers in the on-demand world, Dr. Reich said he has concluded that “most would much rather have good, well-paying, regular jobs.”
  • Proponents of on-demand work point out that many of the tech giants that sprang up over the last decade minted billions in profits without hiring very many people; Facebook, for instance, serves more than a billion users, but employs only a few thousand highly skilled workers, most of them in California.
  • at the end of 2014, Uber had 160,000 drivers regularly working for it in the United States. About 40,000 new drivers signed up in December alone, and the number of sign-ups was doubling every six months.
  • The report found that on average, Uber’s drivers worked fewer hours and earned more per hour than traditional taxi drivers, even when you account for their expenses. That conclusion, though, has raised fierce debate among economists, because it’s not clear how much Uber drivers really are paying in expenses. Drivers on the service use their own cars and pay for their gas; taxi drivers generally do not.
  • A survey of Uber drivers contained in the report found that most were already employed full or part time when they found Uber, and that earning an additional income on the side was a primary benefit of driving for Uber.
  • The larger worry about on-demand jobs is not about benefits, but about a lack of agency — a future in which computers, rather than humans, determine what you do, when and for how much. The rise of Uber-like jobs is the logical culmination of an economic and tech system that holds efficiency as its paramount virtue.
  • “These services are successful because they are tapping into people’s available time more efficiently,” Dr. Sundararajan said. “You could say that people are monetizing their own downtime.” Think about that for a second; isn’t “monetizing downtime” a hellish vision of the future of work?
  • “I’m glad if people like working for Uber, but those subjective feelings have got to be understood in the context of there being very few alternatives,” Dr. Reich said. “Can you imagine if this turns into a Mechanical Turk economy, where everyone is doing piecework at all odd hours, and no one knows when the next job will come, and how much it will pay? What kind of private lives can we possibly have, what kind of relationships, what kind of families?”
Javier E

Economics: A new focus on context and connections - 0 views

  • THE basic principles of economics have not changed—people and firms respond to incentives; demand and supply determine the relative prices of goods, services and even money itself; markets generally allocate resources well and deliver welfare-improving outcomes. However, the notion that markets are always efficient, can be left to themselves and are self-correcting is no longer tenable. Markets do eventually correct but, if allowed free rein, can get so far out of line that the corrections take the form of collapses that can be very painful. 
  • The crisis has highlighted the importance of the government's role in regulating markets to make them function smoothly. At the same time, the government is equally capable of mucking up markets—even well-meaning governments, in the name of improving social welfare (e.g., making housing affordable for everyone), can often create perverse incentives that only foment more instability.
  • In short, the crisis has brought to the fore the complex connections among markets, government and social and economic policies.
  • The crisis has injected a good dose of humility into the discipline (at least in most corners) and made it much more challenging to teach economics because our existing models are simply too rudimentary to capture all of these connections
  • It has become even more important to emphasise what economic theory can in fact give us—powerful but narrow insights that should guide our thinking but not dominate it.
Javier E

Op-Ed Contributor - Rich Man's Burden - Op-Ed - NYTimes.com - 0 views

  • what’s different from Weber’s era is that it is now the rich who are the most stressed out and the most likely to be working the most. Perhaps for the first time since we’ve kept track of such things, higher-income folks work more hours than lower-wage earners do.
  • This is a stunning moment in economic history: At one time we worked hard so that someday we (or our children) wouldn’t have to. Today, the more we earn, the more we work, since the opportunity cost of not working is all the greater (and since the higher we go, the more relatively deprived we feel).
  • when we get a raise, instead of using that hard-won money to buy “the good life,” we feel even more pressure to work since the shadow costs of not working are all the greater.
  • One of these forces is America’s income inequality, which has steadily increased since 1969. We typically think of this process as one in which the rich get richer and the poor get poorer. Surely, that should, if anything, make upper income earners able to relax.
  • technology both creates and reflects economic realities. Instead, less visible forces have given birth to this state of affairs.
  • even with the same work hours and household duties, women with higher incomes report feeling more stressed than women with lower incomes, according to a recent study by the economists Daniel Hamermesh and Jungmin Lee. In other words, not only does more money not solve our problems at home, it may even make things worse.
  • It turns out that the growing disparity is really between the middle and the top. If we divided the American population in half, we would find that those in the lower half have been pretty stable over the last few decades in terms of their incomes relative to one another. However, the top half has been stretching out like taffy. In fact, as we move up the ladder the rungs get spaced farther and farther apart.
  • The result of this high and rising inequality is what I call an “economic red shift.” Like the shift in the light spectrum caused by the galaxies rushing away, those Americans who are in the top half of the income distribution experience a sensation that, while they may be pulling away from the bottom half, they are also being left further and further behind by those just above them.
  • since inequality rises exponentially the higher you climb the economic ladder, the better off you are in absolute terms, the more relatively deprived you may feel. In fact, a poll of New Yorkers found that those who earned more than $200,000 a year were the most likely of any income group to agree that “seeing other people with money” makes them feel poor.
  • Because these forces drive each other, they trap us in a vicious cycle: Rising inequality causes us to work more to keep up in an economy increasingly dominated by status goods. That further widens income differences.
  • if you are someone who is pretty well off but couldn’t stop working yesterday nonetheless, don’t blame your iPhone or laptop. Blame a new wrinkle in something much more antiquated: inequality.
Javier E

The Power of Nudges, for Good and Bad - The New York Times - 0 views

  • Nudges, small design changes that can markedly affect individual behavior, have been catching on. These techniques rely on insights from behavioral science
  • when used ethically, they can be very helpful. But we need to be sure that they aren’t being employed to sway people to make bad decisions that they will later regret.
  • Three principles should guide the use of nudges:
    ■ All nudging should be transparent and never misleading.
    ■ It should be as easy as possible to opt out of the nudge, preferably with as little as one mouse click.
    ■ There should be good reason to believe that the behavior being encouraged will improve the welfare of those being nudged.
  • the government teams in Britain and the United States that have focused on nudging have followed these guidelines scrupulously.
  • the private sector is another matter. In this domain, I see much more troubling behavior.
  • Many companies are nudging purely for their own profit and not in customers’ best interests. In a recent column in The New York Times, Robert Shiller called such behavior “phishing.” Mr. Shiller and George Akerlof, both Nobel-winning economists, have written a book on the subject, “Phishing for Phools.”
  • Some argue that phishing — or evil nudging — is more dangerous in government than in the private sector. The argument is that government is a monopoly with coercive power, while we have more choice in the private sector over which newspapers we read and which airlines we fly.
  • I think this distinction is overstated. In a democracy, if a government creates bad policies, it can be voted out of office. Competition in the private sector, however, can easily work to encourage phishing rather than stifle it.
  • One example is the mortgage industry in the early 2000s. Borrowers were encouraged to take out loans that they could not repay when real estate prices fell. Competition did not eliminate this practice, because it was hard for anyone to make money selling the advice “Don’t take that loan.”
kushnerha

There's nothing wrong with grade inflation - The Washington Post - 0 views

  • By the early ’90s, so long as one had the good sense to major in the humanities — all bets were off in the STEM fields — it was nearly impossible to get a final grade below a B-minus at an elite college. According to a 2012 study, the average college GPA, which in the 1930s was a C-plus, had risen to a B at public universities and a B-plus at private schools. At Duke, Pomona and Harvard, D’s and F’s combine for just 2 percent of all grades. A Yale report found that 62 percent of all Yale grades are A or A-minus. According to a 2013 article in the Harvard Crimson, the median grade at Harvard was an A-minus, while the most common grade was an A.
  • The result is widespread panic about grade inflation at elite schools. (The phenomenon is not as prevalent at community colleges and less-selective universities.) Some blame students’ consumer mentality, a few see a correlation with small class sizes (departments with falling enrollments want to keep students happy), and many cite a general loss of rigor in a touchy-feely age.
  • Yet whenever elite schools have tried to fight grade inflation, it’s been a mess. Princeton instituted strict caps on the number of high grades awarded, then abandoned the plan, saying the caps dissuaded applicants and made students miserable. At Wellesley, grade-inflated humanities departments mandated that the average result in their introductory and intermediate classes not exceed a B-plus. According to one study, enrollment fell by one-fifth, and students were 30 percent less likely to major in one of these subjects.
  • I liked the joy my students found when they actually earned a grade they’d been reaching for. But whereas I once thought we needed to contain grades, I now see that we may as well let them float skyward. If grade inflation is bad, fighting it is worse. Our goal should be ending the centrality of grades altogether. For years, I feared that a world of only A’s would mean the end of meaningful grades; today, I’m certain of it. But what’s so bad about that?
  • It’s easy to see why schools want to fight grade inflation. Grades should motivate certain students: those afraid of the stigma of a bad grade or those ambitious, by temperament or conditioning, to succeed in measurable ways. Periodic grading during a term, on quizzes, tests or papers, provides feedback to students, which should enable them to do better. And grades theoretically signal to others, such as potential employers or graduate schools, how well the student did. (Grade-point averages are also used for prizes and class rankings, though that doesn’t strike me as an important feature.)
  • But it’s not clear that grades work well as motivators. Although recent research on the effects of grades is limited, several studies in the 1970s, 1980s and 1990s measured how students related to a task or a class when it was graded compared to when it was ungraded. Overall, graded students are less interested in the topic at hand and — for obvious, common-sense reasons — more inclined to pick the easiest possible task when given the chance. In the words of progressive-education theorist Alfie Kohn, author of “The Homework Myth,” “the quality of learning declines” when grades are introduced, becoming “shallower and more superficial when the point is to get a grade.”
  • Even where grades can be useful, as in describing what material a student has mastered, they are remarkably crude instruments. Yes, the student who gets a 100 on a calculus exam probably grasps the material better than the student with a 60 — but only if she retains the knowledge, which grades can’t show.
  • I still can’t say very well what separates a B from an A. What’s more, I never see the kind of incompetence or impudence that would merit a D or an F. And now, in our grade-inflated world, it’s even harder to use grades to motivate, or give feedback, or send a signal to future employers or graduate schools.
  • According to a 2012 study by the Chronicle of Higher Education, GPA was seventh out of eight factors employers considered in hiring, behind internships, extracurricular activities and previous employment. Last year, Stanford’s registrar told the Chronicle about “a clamor” from employers “for something more meaningful” than the traditional transcript. The Lumina Foundation gave a $1.27 million grant to two organizations for college administrators working to develop better student records, with grades only one part of a student’s final profile.
  • Some graduate schools, too, have basically ditched grades. “As long as you don’t bomb and flunk out, grades don’t matter very much in M.F.A. programs,” the director of one creative-writing program told the New York Times. To top humanities PhD programs, letters of reference and writing samples matter more than overall GPA (although students are surely expected to have received good grades in their intended areas of study). In fact, it’s impossible to get into good graduate or professional schools without multiple letters of reference, which have come to function as the kind of rich, descriptive comments that could go on transcripts in place of grades.
  • suggests that GPAs serve not to validate students from elite schools but to keep out those from less-prestigious schools and large public universities, where grades are less inflated. Grades at community colleges “have actually dropped” over the years, according to Stuart Rojstaczer, a co-author of the 2012 grade-inflation study. That means we have two systems: one for students at elite schools, who get jobs based on references, prestige and connections, and another for students everywhere else, who had better maintain a 3.0. Grades are a tool increasingly deployed against students without prestige.
  • The trouble is that, while it’s relatively easy for smaller colleges to go grade-free, with their low student-to-teacher ratios, it’s tough for professors at larger schools, who must evaluate more students, more quickly, with fewer resources. And adjuncts teaching five classes for poverty wages can’t write substantial term-end comments, so grades are a necessity if they want to give any feedback at all.
  • It would mean hiring more teachers and paying them better (which schools should do anyway). And if transcripts become more textured, graduate-school admission offices and employers will have to devote more resources to reading them, and to getting to know applicants through interviews and letters of reference — a salutary trend that is underway already.
  • When I think about getting rid of grades, I think of happier students, with whom I have more open, democratic relationships. I think about being forced to pay more attention to the quiet ones, since I’ll have to write something truthful about them, too. I’ve begun to wonder if a world without grades may be one of those states of affairs (like open marriages, bicycle lanes and single-payer health care) that Americans resist precisely because they seem too good, suspiciously good. Nothing worth doing is supposed to come easy.
  • Alfie Kohn, too, sees ideology at work in the grade-inflation panic. “Most of what powers the arguments against grade inflation is a very right-wing idea that excellence consists in beating everyone else around you,” he says. “Even when you have sorted them — even when they get to Harvard! — we have to sort them again.” In other words, we can trust only a system in which there are clear winners and losers.
Javier E

Opinion | Knowledge, Ignorance and Climate Change - The New York Times - 1 views

  • the value of being aware of our ignorance has been a recurring theme in Western thought: René Descartes said it’s necessary to doubt all things to build a solid foundation for science; and Ludwig Wittgenstein, reflecting on the limits of language, said that “the difficulty in philosophy is to say no more than we know.”
  • Sometimes, when it appears that someone is expressing doubt, what he is really doing is recommending a course of action. For example, if I tell you that I don’t know whether there is milk in the fridge, I’m not exhibiting philosophical wisdom — I’m simply recommending that you check the fridge before you go shopping.
  • According to NASA, at least 97 percent of actively publishing climate scientists think that “climate-warming trends over the past century are extremely likely caused by human activities.”
  • As a philosopher, I have nothing to add to the scientific evidence of global warming, but I can tell you how it’s possible to get ourselves to sincerely doubt things, despite abundant evidence to the contrary
  • scenarios suggest that it’s possible to feel as though you don’t know something even when possessing enormous evidence in its favor. Philosophers call scenarios like these “skeptical pressure” cases
  • In general, a skeptical pressure case is a thought experiment in which the protagonist has good evidence for something that he or she believes, but the reader is reminded that the protagonist could have made a mistake
  • If the story is set up in the right way, the reader will be tempted to think that the protagonist’s belief isn’t genuine knowledge
  • When presented with these thought experiments, some philosophy students conclude that what these examples show is that knowledge requires full-blown certainty. In these skeptical pressure cases, the evidence is overwhelming, but not 100 percent. It’s an attractive idea, but it doesn’t sit well with the fact that we ordinarily say we know lots of things with much lower probability.
  • Although there is no consensus about how it arises, a promising idea defended by the philosopher David Lewis is that skeptical pressure cases often involve focusing on the possibility of error. Once we start worrying and ruminating about this possibility, no matter how far-fetched, something in our brains causes us to doubt. The philosopher Jennifer Nagel aptly calls this type of effect “epistemic anxiety.”
  • In my own work, I have speculated that an extreme version of this phenomenon is operative in obsessive compulsive disorder
  • The standard response by climate skeptics is a lot like our reaction to skeptical pressure cases. Climate skeptics understand that 97 percent of scientists disagree with them, but they focus on the very tiny fraction of holdouts. As in the lottery case, this focus might be enough to sustain their skepticism.
  • Anti-vaccine proponents, for example, aware that medical professionals disagree with their position, focus on any bit of fringe research that might say otherwise.
  • Skeptical allure can be gripping. Piling on more evidence does not typically shake you out of it, just as making it even more probable that you will lose the lottery does not all of a sudden make you feel like you know your ticket is a loser.
  • One way to counter the effects of skepticism is to stop talking about “knowledge” and switch to talking about probabilities. Instead of saying that you don’t know some claim, try to estimate the probability that it is true. As hedge fund managers, economists, policy researchers, doctors and bookmakers have long been aware, the way to make decisions while managing risk is through probabilities.
  • Once we switch to this perspective, claims to “not know,” like those made by Trump, lose their force and we are pushed to think more carefully about the existing data and engage in cost-benefit analyses.
  • It’s easy to say you don’t know, but it’s harder to commit to an actual low probability estimate in the face of overwhelming contrary evidence.
  • Socrates was correct that awareness of one’s ignorance is virtuous, but philosophers have subsequently uncovered many pitfalls associated with claims of ignorance. An appreciation of these issues can help elevate public discourse on important topics, including the future of our planet.
Javier E

Opinion | Is There Such a Thing as an Authoritarian Voter? - The New York Times - 0 views

  • Jonathan Weiler, a political scientist at the University of North Carolina at Chapel Hill, has spent much of his career studying the appeal of authoritarian figures: politicians who preach xenophobia, beat up on the press and place themselves above the law while extolling “law and order” for everyone else.
  • He is one of many scholars who believe that deep-seated psychological traits help explain voters’ attraction to such leaders. “These days,” he told me, “audiences are more receptive to the idea” than they used to be.
  • “In 2018, the sense of fear and panic — the disorientation about how people who are not like us could see the world the way they do — it’s so elemental,” Mr. Weiler said. “People understand how deeply divided we are, and they are looking for explanations that match the depth of that division.”
  • Moreover, using the child-rearing questionnaire, African-Americans score as far more authoritarian than whites
  • what, exactly, is an “authoritarian” personality? How do you measure it?
  • for more than half a century — social scientists have tried to figure out why some seemingly mild-mannered people gravitate toward a strongman
  • the philosopher (and German refugee) Theodor Adorno collaborated with social scientists at the University of California at Berkeley to investigate why ordinary people supported fascist, anti-Semitic ideology during the war. They used a questionnaire called the F-scale (F is for fascism) and follow-up interviews to analyze the “total personality” of the “potentially antidemocratic individual.”
  • The resulting 1,000-page tome, “The Authoritarian Personality,” published in 1950, found that subjects who scored high on the F-scale disdained the weak and marginalized. They fixated on sexual deviance, embraced conspiracy theories and aligned themselves with domineering leaders “to serve powerful interests and so participate in their power,”
  • “Globalized free trade has shafted American workers and left us looking for a strong male leader, a ‘real man,’” he wrote. “Trump offers exactly what my maladapted unconscious most craves.”
  • one of the F-scale’s prompts: “Obedience and respect for authority are the most important virtues children should learn.” Today’s researchers often diagnose latent authoritarians through a set of questions about preferred traits in children: Would you rather your child be independent or have respect for elders? Have curiosity or good manners? Be self-reliant or obedient? Be well behaved or considerate?
  • a glance at the Christian group Focus on the Family’s “biblical principles for spanking” reminds us that your approach to child rearing is not pre-political; it is shorthand for your stance in the culture wars.
  • “All the social sciences are brought to bear to try to explain all the evil that persists in the world, even though the liberal Enlightenment worldview says that we should be able to perfect things,” said Mr. Strouse, the Trump voter
  • what should have been obvious:
  • “Trump’s electoral strength — and his staying power — have been buoyed, above all, by Americans with authoritarian inclinations,” wrote Matthew MacWilliams, a political consultant who surveyed voters during the 2016 election
  • The child-trait test, then, is a tool to identify white people who are anxious about their decline in status and power.
  • new book, “Prius or Pickup?,” by ditching the charged term “authoritarian.” Instead, they divide people into three temperamental camps: fixed (people who are wary of change and “set in their ways”), fluid (those who are more open to new experiences and people) and mixed (those who are ambivalent).
  • “The term ‘authoritarian’ connotes a fringe perspective, and the perspective we’re describing is far from fringe,” Mr. Weiler said. “It’s central to American public opinion, especially on cultural issues like immigration and race.”
  • Other scholars apply a typology based on the “Big Five” personality traits identified by psychologists in the mid-20th century: extroversion, agreeableness, conscientiousness, neuroticism and openness to experience. (It seems that liberals are open but possibly neurotic, while conservatives are more conscientious.)
  • Historical context matters — it shapes who we are and how we debate politics. “Reason moves slowly,” William English, a political economist at Georgetown, told me. “It’s constituted sociologically, by deep community attachments, things that change over generations.”
  • “it is a deep-seated aspiration of many social scientists — sometimes conscious and sometimes unconscious — to get past wishy-washy culture and belief. Discourses that can’t be scientifically reduced are problematic” for researchers who want to provide “a universal account of behavior.”
  • in our current environment, where polarization is so unyielding, the apparent clarity of psychological and biological explanations becomes seductive
  • Attitudes toward parenting vary across cultures, and for centuries African-Americans have seen the consequences of a social and political hierarchy arrayed against them, so they can hardly be expected to favor it — no matter what they think about child rearing
  • — we know that’s not going to happen. People have wicked tendencies.”
  • as the social scientific portrait of humanity grows more psychological and irrational, it comes closer and closer to approximating the old Adam of traditional Christianity: a fallen, depraved creature, unable to see himself clearly except with the aid of a higher power
  • The conclusions of political scientists should inspire humility rather than hubris. In the end, they have confirmed what so many observers of our species have long suspected: None of us are particularly free or rational creatures.
  • Allen Strouse is not the archetypal Trump voter whom journalists discover in Rust Belt diners. He is a queer Catholic poet and scholar of medieval literature who teaches at the New School in New York City. He voted for Mr. Trump “as a protest against the Democrats’ failures on economic issues,” but the psychological dimensions of his vote intrigue him. “Having studied Freudian analysis, and being in therapy for 10 years, I couldn’t not reflexively ask myself, ‘How does this decision have to do with my psychology?’” he told me.
  • their preoccupation with childhood and “primitive and irrational wishes and fears” have influenced the study of authoritarianism ever since.
aliciathompson1

Can economics be ethical? | Prospect Magazine - 2 views

  • Recent debates about the economy have rediscovered the question, “is that right?”, where “right” means more than just profits or efficiency.
  • Some argue that because free markets allow for personal choice, they are already ethical. Others have accepted the ethical critique and embraced corporate social responsibility.
  • Most radical of all are the ethical systems that reject the market completely. Marxists, some feminists and a few Buddhist approaches to economics take this line: their ethics dispute the starting points of classical market economics—ideas like individual consumer sovereignty, private property and the attractiveness of material wealth. They conclude that to be ethical, an individual should withdraw from the market entirely, or even actively disrupt it.
  • These human quirks mean we can never make purely “rational” decisions. A new wave of behavioural economists, aided by neuroscientists, is trying to understand our psychology, both alone and in groups, so they can anticipate our decisions in the marketplace more accurately.
Javier E

The Tech Industry's Psychological War on Kids - Member Feature Stories - Medium - 0 views

  • she cried, “They took my f***ing phone!” Attempting to engage Kelly in conversation, I asked her what she liked about her phone and social media. “They make me happy,” she replied.
  • Even though they were loving and involved parents, Kelly’s mom couldn’t help feeling that they’d failed their daughter and must have done something terribly wrong that led to her problems.
  • My practice as a child and adolescent psychologist is filled with families like Kelly’s. These parents say their kids’ extreme overuse of phones, video games, and social media is the most difficult parenting issue they face — and, in many cases, is tearing the family apart.
  • What none of these parents understand is that their children’s and teens’ destructive obsession with technology is the predictable consequence of a virtually unrecognized merger between the tech industry and psychology.
  • Dr. B.J. Fogg, is a psychologist and the father of persuasive technology, a discipline in which digital machines and apps — including smartphones, social media, and video games — are configured to alter human thoughts and behaviors. As the lab’s website boldly proclaims: “Machines designed to change humans.”
  • These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit.
  • psychology — a discipline that we associate with healing — is now being used as a weapon against children.
  • This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
  • Likewise, social media companies use persuasive design to prey on the age-appropriate desire of preteen and teen kids, especially girls, to be socially successful. This drive is built into our DNA, since real-world relational skills have fostered human evolution.
  • Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”
  • Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfill their basic human drives — to be social or obtain goals — better than real-world alternatives.
  • Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.
  • While persuasion techniques work well on adults, they are particularly effective at influencing the still-maturing child and teen brain.
  • “Video games, better than anything else in our culture, deliver rewards to people, especially teenage boys,” says Fogg. “Teenage boys are wired to seek competency. To master our world and get better at stuff. Video games, in dishing out rewards, can convey to people that their competency is growing, you can get better at something second by second.”
  • it’s persuasive design that’s helped convince this generation of boys they are gaining “competency” by spending countless hours on game sites, when the sad reality is they are locked away in their rooms gaming, ignoring school, and not developing the real-world competencies that colleges and employers demand.
  • Persuasive technologies work because of their apparent triggering of the release of dopamine, a powerful neurotransmitter involved in reward, attention, and addiction.
  • As she says, “If you don’t get 100 ‘likes,’ you make other people share it so you get 100…. Or else you just get upset. Everyone wants to get the most ‘likes.’ It’s like a popularity contest.”
  • there are costs to Casey’s phone obsession, noting that the “girl’s phone, be it Facebook, Instagram or iMessage, is constantly pulling her away from her homework, sleep, or conversations with her family.”
  • Casey says she wishes she could put her phone down. But she can’t. “I’ll wake up in the morning and go on Facebook just… because,” she says. “It’s not like I want to or I don’t. I just go on it. I’m, like, forced to. I don’t know why. I need to. Facebook takes up my whole life.”
  • B.J. Fogg may not be a household name, but Fortune Magazine calls him a “New Guru You Should Know,” and his research is driving a worldwide legion of user experience (UX) designers who utilize and expand upon his models of persuasive design.
  • “No one has perhaps been as influential on the current generation of user experience (UX) designers as Stanford researcher B.J. Fogg.”
  • the core of UX research is about using psychology to take advantage of our human vulnerabilities.
  • As Fogg is quoted in Kosner’s Forbes article, “Facebook, Twitter, Google, you name it, these companies have been using computers to influence our behavior.” However, the driving force behind behavior change isn’t computers. “The missing link isn’t the technology, it’s psychology,” says Fogg.
  • UX researchers not only follow Fogg’s design model, but also his apparent tendency to overlook the broader implications of persuasive design. They focus on the task at hand, building digital machines and apps that better demand users’ attention, compel users to return again and again, and grow businesses’ bottom line.
  • the “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers.
  • “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.”
  • Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.” Hence, social networks are designed for ease of use
  • Finally, Fogg says that potential users need to be triggered to use a site. This is accomplished by a myriad of digital tricks, including the sending of incessant notifications
  • moral questions about the impact of turning persuasive techniques on children and teens are not being asked. For example, should the fear of social rejection be used to compel kids to compulsively use social media? Is it okay to lure kids away from school tasks that demand a strong mental effort so they can spend their lives on social networks or playing video games that don’t make them think much at all?
  • Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.”
  • the startup Dopamine Labs boasts about its use of persuasive techniques to increase profits: “Connect your app to our Persuasive AI [Artificial Intelligence] and lift your engagement and revenue up to 30% by giving your users our perfect bursts of dopamine,” and “A burst of Dopamine doesn’t just feel good: it’s proven to re-wire user behavior and habits.”
  • Ramsay Brown, the founder of Dopamine Labs, says in a KQED Science article, “We have now developed a rigorous technology of the human mind, and that is both exciting and terrifying. We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behavior in ways that, unbeknownst to them, feel second-nature but are really by design.”
  • Programmers call this “brain hacking,” as it compels users to spend more time on sites even though they mistakenly believe it’s strictly due to their own conscious choices.
  • Banks of computers employ AI to “learn” which of a countless number of persuasive design elements will keep users hooked
  • A persuasion profile of a particular user’s unique vulnerabilities is developed in real time and exploited to keep users on the site and make them return again and again for longer periods of time. This drives up profits for consumer internet companies whose revenue is based on how much their products are used.
  • “The leaders of Internet companies face an interesting, if also morally questionable, imperative: either they hijack neuroscience to gain market share and make large profits, or they let competitors do that and run away with the market.”
  • Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival.
  • Children’s well-being is not part of the decision calculus.
  • one breakthrough occurred in 2017 when Facebook documents were leaked to The Australian. The internal report crafted by Facebook executives showed the social network boasting to advertisers that by monitoring posts, interactions, and photos in real time, the network is able to track when teens feel “insecure,” “worthless,” “stressed,” “useless” and a “failure.”
  • The report also bragged about Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.”
  • These design techniques provide tech corporations a window into kids’ hearts and minds to measure their particular vulnerabilities, which can then be used to control their behavior as consumers. This isn’t some strange future… this is now.
  • The official tech industry line is that persuasive technologies are used to make products more engaging and enjoyable. But the revelations of industry insiders can reveal darker motives.
  • Revealing the hard science behind persuasive technology, Hopson says, “This is not to say that players are the same as rats, but that there are general rules of learning which apply equally to both.”
  • After penning the paper, Hopson was hired by Microsoft, where he helped lead the development of Xbox Live, Microsoft’s online gaming system.
  • “If game designers are going to pull a person away from every other voluntary social activity or hobby or pastime, they’re going to have to engage that person at a very deep level in every possible way they can.”
  • This is the dominant effect of persuasive design today: building video games and social media products so compelling that they pull users away from the real world to spend their lives in for-profit domains.
  • Persuasive technologies are reshaping childhood, luring kids away from family and schoolwork to spend more and more of their lives sitting before screens and phones.
  • “Since we’ve figured to some extent how these pieces of the brain that handle addiction are working, people have figured out how to juice them further and how to bake that information into apps.”
  • Today, persuasive design is likely distracting adults from driving safely, productive work, and engaging with their own children — all matters which need urgent attention
  • Still, because the child and adolescent brain is more easily controlled than the adult mind, the use of persuasive design is having a much more hurtful impact on kids.
  • But to engage in a pursuit at the expense of important real-world activities is a core element of addiction.
  • younger U.S. children now spend 5 ½ hours each day with entertainment technologies, including video games, social media, and online videos.
  • Even more, the average teen now spends an incredible 8 hours each day playing with screens and phones
  • U.S. kids only spend 16 minutes each day using the computer at home for school.
  • Quietly, using screens and phones for entertainment has become the dominant activity of childhood.
  • Younger kids spend more time engaging with entertainment screens than they do in school
  • teens spend even more time playing with screens and phones than they do sleeping
  • kids are so taken with their phones and other devices that they have turned their backs to the world around them.
  • many children are missing out on real-life engagement with family and school — the two cornerstones of childhood that lead them to grow up happy and successful
  • persuasive technologies are pulling kids into often toxic digital environments
  • A too frequent experience for many is being cyberbullied, which increases their risk of skipping school and considering suicide.
  • And there is growing recognition of the negative impact of FOMO, or the fear of missing out, as kids spend their social media lives watching a parade of peers who look to be having a great time without them, feeding their feelings of loneliness and being less than.
  • The combined effects of the displacement of vital childhood activities and exposure to unhealthy online environments are wrecking a generation.
  • as the typical age when kids get their first smartphone has fallen to 10, it’s no surprise to see serious psychiatric problems — once the domain of teens — now enveloping young kids
  • Self-inflicted injuries, such as cutting, that are serious enough to require treatment in an emergency room, have increased dramatically in 10- to 14-year-old girls, up 19% per year since 2009.
  • While girls are pulled onto smartphones and social media, boys are more likely to be seduced into the world of video gaming, often at the expense of a focus on school
  • it’s no surprise to see this generation of boys struggling to make it to college: a full 57% of college admissions are granted to young women compared with only 43% to young men.
  • Economists working with the National Bureau of Economic Research recently demonstrated how many young U.S. men are choosing to play video games rather than join the workforce.
  • The destructive forces of psychology deployed by the tech industry are making a greater impact on kids than the positive uses of psychology by mental health providers and child advocates. Put plainly, the science of psychology is hurting kids more than helping them.
  • Hope for this wired generation has seemed dim until recently, when a surprising group has come forward to criticize the tech industry’s use of psychological manipulation: tech executives
  • Tristan Harris, formerly a design ethicist at Google, has led the way by unmasking the industry’s use of persuasive design. Interviewed in The Economist’s 1843 magazine, he says, “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”
  • Marc Benioff, CEO of the cloud computing company Salesforce, is one of the voices calling for the regulation of social media companies because of their potential to addict children. He says that just as the cigarette industry has been regulated, so too should social media companies. “I think that, for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive, and we need to rein that back as much as possible,”
  • “If there’s an unfair advantage or things that are out there that are not understood by parents, then the government’s got to come forward and illuminate that.”
  • Since millions of parents, for example the parents of my patient Kelly, have absolutely no idea that devices are used to hijack their children’s minds and lives, regulation of such practices is the right thing to do.
  • Another improbable group to speak out on behalf of children is tech investors.
  • How has the consumer tech industry responded to these calls for change? By going even lower.
  • Facebook recently launched Messenger Kids, a social media app that will reach kids as young as five years old. Suggestive that harmful persuasive design is now homing in on very young children is the declaration of Messenger Kids Art Director, Shiu Pei Luu, “We want to help foster communication [on Facebook] and make that the most exciting thing you want to be doing.”
  • the American Psychological Association (APA) — which is tasked with protecting children and families from harmful psychological practices — has been essentially silent on the matter
  • APA Ethical Standards require the profession to make efforts to correct the “misuse” of the work of psychologists, which would include the application of B.J. Fogg’s persuasive technologies to influence children against their best interests
  • Manipulating children for profit without their own or parents’ consent, and driving kids to spend more time on devices that contribute to emotional and academic problems is the embodiment of unethical psychological practice.
  • “Never before in history have basically 50 mostly men, mostly 20–35, mostly white engineer designer types within 50 miles of where we are right now [Silicon Valley], had control of what a billion people think and do.”
  • Some may argue that it’s the parents’ responsibility to protect their children from tech industry deception. However, parents have no idea of the powerful forces aligned against them, nor do they know how technologies are developed with drug-like effects to capture kids’ minds
  • Others will claim that nothing should be done because the intention behind persuasive design is to build better products, not manipulate kids
  • similar circumstances exist in the cigarette industry, as tobacco companies have as their intention profiting from the sale of their product, not hurting children. Nonetheless, because cigarettes and persuasive design predictably harm children, actions should be taken to protect kids from their effects.
  • in a 1998 academic paper, Fogg describes what should happen if things go wrong, saying, if persuasive technologies are “deemed harmful or questionable in some regard, a researcher should then either take social action or advocate that others do so.”
  • I suggest turning to President John F. Kennedy’s prescient guidance: He said that technology “has no conscience of its own. Whether it will become a force for good or ill depends on man.”
  • The APA should begin by demanding that the tech industry’s behavioral manipulation techniques be brought out of the shadows and exposed to the light of public awareness
  • Changes should be made in the APA’s Ethics Code to specifically prevent psychologists from manipulating children using digital machines, especially if such influence is known to pose risks to their well-being.
  • Moreover, the APA should follow its Ethical Standards by making strong efforts to correct the misuse of psychological persuasion by the tech industry and by user experience designers outside the field of psychology.
  • It should join with tech executives who are demanding that persuasive design in kids’ tech products be regulated
  • The APA also should make its powerful voice heard amongst the growing chorus calling out tech companies that intentionally exploit children’s vulnerabilities.
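The Fogg Behavior Model summarized in the annotations above, with its three factors of motivation, ability, and triggers, can be sketched as a toy decision rule. The multiplicative form, the [0, 1] scales, and the 0.5 "action line" are illustrative assumptions for the sketch, not Fogg's published formula:

```python
# Toy sketch of the Fogg Behavior Model: a behavior occurs when
# motivation and ability together cross an "action line" at the
# moment a trigger arrives. Threshold and multiplicative form are
# illustrative assumptions.

def behavior_occurs(motivation: float, ability: float,
                    triggered: bool, action_line: float = 0.5) -> bool:
    """motivation and ability are normalized to the [0, 1] range."""
    if not triggered:
        return False  # no trigger, no behavior, regardless of motivation
    return motivation * ability >= action_line

# High motivation can compensate for low ability, and vice versa,
# but nothing happens without the trigger:
print(behavior_occurs(0.9, 0.6, triggered=True))   # True
print(behavior_occurs(0.9, 0.6, triggered=False))  # False
```

This is why the annotations stress ease of use ("don't think hard") and incessant notifications: raising ability and supplying triggers moves even weakly motivated users past the action line.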
Javier E

The Disturbing New Facts About American Capitalism - WSJ - 0 views

  • “Let your winners run” is one of the oldest adages in investing. One of the newest ideas is that the winners may be running away with everything.
  • Modern capitalism is built on the idea that as companies get big, they become fat and happy, opening themselves up to lean and hungry competitors that can underprice and overtake them. That cycle of creative destruction may be changing in ways that help explain the seemingly unstoppable rise of the stock market.
  • U.S. companies are moving toward a winner-take-all system in which giants get stronger, not weaker, as they expand.
  • That’s the latest among several recent studies by economists working independently, all arriving at similar findings: A few “superstar firms” have grown to dominate their industries, crowding out competitors and controlling markets to a degree not seen in many decades.
  • Let’s look beyond such obvious winner-take-all examples as Apple or Alphabet, the parent of Google.
  • Consider real-estate services. In 1997, according to Profs. Grullon, Larkin and Michaely, that sector had 42 publicly traded companies; the four largest generated 49% of the group’s total revenue. By 2014, only 20 public firms were left, and the top four— CBRE Group, Jones Lang LaSalle, Realogy Holdings and Wyndham Worldwide—commanded 78% of the group’s combined revenue.
  • Or look at supermarkets. In 1997, there were 36 publicly traded companies in that industry, with the top four accounting for more than half of total sales. By 2014, only 11 were left. The top four—Kroger, Supervalu, Whole Foods Market and Roundy’s (since acquired by Kroger)—held 89% of the pie.
  • The U.S. had more than 7,000 public companies 20 years ago, the professors say; nowadays, it’s fewer than 4,000.
  • The winners are also grabbing most of the profits
  • At the end of 1996, the 25 companies in the S&P 500 with the highest net profit margins—income as a percentage of revenue—earned a median of just under 21 cents on every dollar of sales. Last year, the top 25 such companies earned a median of 39 cents on the dollar.
  • Two decades ago, the median net margin among all S&P 500 members was 6.7%. By the end of 2016, that had increased to 9.7%.
  • So while companies as a whole became more profitable over the past 20 years, the winners have become vastly more profitable, nearly doubling the gains they got on each dollar of sales.
  • Why might it be easier now for winners to take all? Prof. Michaely suggests two theories. Declining enforcement of antitrust rules has led to bigger mergers, less competition and higher profits.
  • The other is technology. “If you want to compete with Google or Amazon,” he says, “you’ll have to invest not just billions, but tens of billions of dollars.”
  • Still, history offers a warning. Many times in the past, winners have taken all but seldom for long.
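The industry-concentration figures quoted above (49%, 78%, 89%) are four-firm concentration ratios: the top four firms' share of the group's combined revenue. A minimal sketch with made-up revenue figures, not data from the article:

```python
# Four-firm concentration ratio (CR4): the share of an industry's
# total revenue held by its four largest firms. The revenue list
# below is a hypothetical placeholder, not the article's data.

def cr4(revenues: list[float]) -> float:
    """Fraction of total revenue held by the four largest firms."""
    top4 = sum(sorted(revenues, reverse=True)[:4])
    return top4 / sum(revenues)

# Hypothetical 11-firm industry, heavily skewed toward the leaders:
industry = [40, 25, 15, 9, 3, 2, 2, 1, 1, 1, 1]
print(f"{cr4(industry):.0%}")  # 89%
```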
Javier E

The Facebooking of Economics - NYTimes.com - 2 views

  • there has been a major erosion of the old norms. It used to be the case that to have a role in the economics discourse you had to have formal credentials and a position of authority; you had to be a tenured professor at a top school publishing in top journals, or a senior government official. Today the ongoing discourse, especially in macroeconomics, is much more free-form.
  • you don’t get to play a major role in that discourse by publishing clever Slateish snark; you get there by saying smart things backed by data.
  • Economics journals stopped being a way to communicate ideas at least 25 years ago, replaced by working papers; publication was more about certification for the purposes of tenure than anything else.
  • at this point the real discussion in macro, and to a lesser extent in other fields, is taking place in the econoblogosphere. This is true even for research done at official institutions like the IMF and the Fed
  • How does the econoblogosphere work? It’s a lot like the 17th-century coffee shop culture Tom Standage describes in his lovely book Writing on the Wall. People with shared interests in effect meet in cyberspace (although many of them are, as it happens, also sitting in real coffee shops at the time, as I am now), exchange ideas, write them up, and make those writeups available to others when they think they’re especially interesting.
  • who are the players in this world? Well, look at any of the various rankings of economics blogs — say, the one at Onalytica. I don’t see any of Brooks’s Thought Leaders there. I see a lot of solid professional economists; a number of equally solid economic journalists; and a few people who don’t fall into standard categories
  • Does this new, amorphous system work? Yes!
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history.
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
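Several of the highlights above describe the same underlying mechanic: extract features from a candidate's record, weight them against outcomes observed in past hires, and map the result onto a simple rating, with Xerox's red/yellow/green output as the clearest example. A minimal sketch of that idea follows; the feature names, weights, and thresholds are invented for illustration (none come from the article), whereas real systems fit their weights against retention and performance data from thousands of past hires.

```python
# Toy outcome-weighted screening score, in the spirit of the color-coded
# ratings described above. All numbers here are hypothetical.

WEIGHTS = {
    "scenario_judgment": 0.5,   # multiple-choice scenario answers
    "cognitive_score": 0.3,     # cognitive-skill assessment
    "personality_fit": 0.2,     # personality-test alignment with past stayers
}

def screen(candidate: dict) -> str:
    """Combine normalized feature scores (0..1) into a color-coded rating."""
    total = sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)
    if total >= 0.7:
        return "green"   # strong candidate: hire away
    if total >= 0.4:
        return "yellow"  # middling
    return "red"         # poor candidate

# 0.5*0.9 + 0.3*0.8 + 0.2*0.7 = 0.83
print(screen({"scenario_judgment": 0.9,
              "cognitive_score": 0.8,
              "personality_fit": 0.7}))  # green
```

The interesting design choice, per the Xerox anecdote, is what is absent: previous experience carries zero weight, because the historical data showed it had no bearing on productivity or retention.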
Javier E

For Trump and G.O.P., the Welfare State Shouldn't Be the Enemy - The New York Times - 0 views

  • Historically, however, the level of government spending and the level of regulation have been packaged together and treated as a single variable. This has forced a choice between two options: the “liberal” package of big government and heavy regulation or the “conservative” package of small government and light regulation.
  • But this is a false choice. Regulatory policy and fiscal policy are independent dimensions, and they can be rebundled in different packages. Mr. Trump’s gestures toward a big-government, low-regulation package — rooted more in instinct than intellect — proved popular with Republican voters
  • Government spending reliably rises as economies grow. When countries get richer, one of the first things their people do is vote for more generous government social services. This pattern, which economists have labeled Wagner’s Law, has held more or less steady for a century in dozens of developed democratic countries.
  • Republicans need to recognize finally that secure property rights, openness to global trade and a relatively low regulatory burden are much more important than fiscal policy for innovation, job creation and rising standards of living
  • not only are sound safety nets popular, but they also increase the public’s tolerance for the dislocations of a dynamic free-market economy
  • Third, the idea that reducing taxpayer-financed government spending is the key to giving people more freedom and revving up the economy encourages conservative hostility to government as such
  • The Republican legislative agenda is stalled because party members have boxed themselves in with their own bad ideas about what freedom and rising prosperity require. A new pro-growth economic platform that sets aside small-government monomania and focuses instead on protecting citizens’ basic rights to commit “capitalist acts between consenting adults,” as the libertarian philosopher Robert Nozick put it, has both practical and political advantages
  • a generous and effective safety net can be embraced as a tool to promote and sustain a culture of freedom, innovation and risk taking. Politically, repairing and improving the slipshod infrastructure of the safety net would liberate Republicans from the bad faith of attacking the welfare state in one breath, halfheartedly promising not to cut entitlements in the next and then breaking that promise once in power.
Javier E

False consciousness - 0 views

  • Marx’s works, including “The Communist Manifesto”, written with Friedrich Engels in 1848, may have had more impact on the modern world than many suppose. Of the manifesto’s ten principal demands, perhaps four have been met in many rich countries, including “free education for all children in public schools” and a “progressive or graduated income tax”.
  • Mr Stedman Jones’s book is above all an intellectual biography, which focuses on the philosophical and political context in which Marx wrote.
  • Marx did not invent communism. Radicals, including Pierre-Joseph Proudhon (1809-65) and the Chartist movement in England, had long used language that modern-day readers would identify as “Marxist”—“to enjoy political equality, abolish property”; “reserve army of labour” and so forth.
  • What, then, was his contribution?
  • Far more significantly, he attempted to provide an overall theoretical description of how capitalism worked
  • in many parts the author is highly critical. For instance, he points out that Marx displayed “condescension towards developments in political economy”
  • More damning, the “Grundrisse”, an unfinished manuscript which many neo-Marxists see as a treasure trove of theory, has “defects [in the] core arguments”.
  • The author encapsulates a feeling of many students of Marx: read the dense, theoretical chapters of “Capital” closely, and no matter how much you try, it is hard to escape the conclusion that there is plenty of nonsense in there.
  • The real value of such a work, in Mr Stedman Jones’s eyes, lies in its documentation of the actual day-to-day life faced by the English working classes.
  • He did not pay enough attention, for example, to objective measures of living standards (such as real wages), which by the 1850s were clearly improving.
Javier E

The Human Library Organisation replaces pages with people - Human resources - 0 views

  • Reading a work of fiction is therefore like getting to know a person. The more you learn about their stories, the less you “judge them by their cover”. 
  • Launched in Denmark 17 years ago, the project hosts events in libraries where users can “check out” a human for half an hour to hear their stories. “Readers” may ask whatever questions they like, and “renew” their loan if they have more.
  • The “books” are selected from a catalogue of marginalised individuals: refugees, ex-strippers, single mothers, Muslim converts, homeless people, those affected by autism and so on.
  • It has held more than 600 events in over 80 countries, and has established semi-permanent libraries in various locations. 