TOK Friends / Group items tagged: linkedin

The Irrational Consumer: Why Economics Is Dead Wrong About How We Make Choices - Derek ... - 4 views

  • Derek Thompson is a senior editor at The Atlantic, where he oversees business coverage for the website. He has also written for Slate, BusinessWeek, and the Daily Beast, and has appeared as a guest on radio and television networks, including NPR, the BBC, CNBC, and MSNBC.
  • First, making a choice is physically exhausting, literally, so that somebody forced to make a number of decisions in a row is likely to get lazy and dumb.
  • Second, having too many choices can make us less likely to come to a conclusion. In a famous study of the so-called "paradox of choice", psychologists Mark Lepper and Sheena Iyengar found that customers presented with six jam varieties were more likely to buy one than customers offered a choice of 24.
  • ...7 more annotations...
  • Many of our mistakes stem from a central "availability bias." Our brains are computers, and we like to access recently opened files, even though many decisions require a deep body of information that might require some searching. Cheap example: We remember the first, last, and peak moments of certain experiences.
  • The third check against the theory of the rational consumer is the fact that we're social animals. We let our friends and family and tribes do our thinking for us
  • neurologists are finding that many of the biases behavioral economists perceive in decision-making start in our brains. "Brain studies indicate that organisms seem to be on a hedonic treadmill, quickly habituating to homeostasis," McFadden writes. In other words, perhaps our preference for the status quo isn't just figuratively in our heads, but also literally sculpted by the hand of evolution inside of our brains.
  • The popular psychological theory of "hyperbolic discounting" says people don't properly evaluate rewards over time. The theory seeks to explain why many groups -- nappers, procrastinators, Congress -- take rewards now and pain later, over and over again. But neurology suggests that it hardly makes sense to speak of "the brain," in the singular, because it's two very different parts of the brain that process choices for now and later. The choice to delay gratification is mostly processed in the frontal system. But studies show that the choice to do something immediately gratifying is processed in a different system, the limbic system, which is more viscerally connected to our behavior, our "reward pathways," and our feelings of pain and pleasure.
  • the final message is that neither the physiology of pleasure nor the methods we use to make choices are as simple or as single-minded as the classical economists thought. A lot of behavior is consistent with pursuit of self-interest, but in novel or ambiguous decision-making environments there is a good chance that our habits will fail us and inconsistencies in the way we process information will undo us.
  • Our brains seem to operate like committees, assigning some tasks to the limbic system, others to the frontal system. The "switchboard" does not seem to achieve complete, consistent communication between different parts of the brain. Pleasure and pain are experienced in the limbic system, but not on one fixed "utility" or "self-interest" scale. Pleasure and pain have distinct neural pathways, and these pathways adapt quickly to homeostasis, with sensation coming from changes rather than levels
  • Social networks are sources of information, on what products are available, what their features are, and how your friends like them. If the information is accurate, this should help you make better choices. On the other hand, it also makes it easier for you to follow the crowd rather than engaging in the due diligence of collecting and evaluating your own information and playing it against your own preferences

The Thread Vibes Are Off - by Anne Helen Petersen - 0 views

  • The way people post on Twitter is different from the way people post on LinkedIn which is different than how people post on Facebook which is different from the way people post on Instagram, no matter how much Facebook keeps telling you to cross-post your IG stories
  • Some people whose job relies on onlineness (like me) have to refine their voices, their ways of being, across several platforms. But most normal people have found their lane — the medium that fits their message — and have stuck with it.
  • People post where they feel public speech “belongs.”
  • ...24 more annotations...
  • For some, the only speech they feel should be truly public should also be “professional.” Hence: LinkedIn, where the only associated image is a professional headshot, and the only conversations are those related to work.
  • Which is how some people really would like to navigate the public sphere: with total freedom and total impunity
  • Twitter is where you could publicly (if often anonymously) fight, troll, dunk, harass, joke, and generally speak without consequence; it’s also where the mundane status update/life musing (once the foundation of Facebook) could live peacefully.
  • Twitter was for publicly observing — through the scroll, but also by tweeting, retweeting, quote tweeting — while remaining effectively invisible, a reply-guy amongst reply-guys, a troll amongst trolls.
  • The Facebook of the 2010s was for broadcasting ideological stances under your real name and fighting with your close and extended community about them; now it’s (largely) about finding advice (and fighting about advice) in affinity groups (often) composed of people you’ve never met.
  • It rewards the esoteric, the visually witty, the mimetic — even more than Twitter.
  • TikTok is for monologues, for expertise, for timing and performance. It’s without pretense.
  • On TikTok, you don’t reshare memes, you use them as the soundtrack to your reimagining, even if that reimagining is just “what if I do the same dance, only with my slightly dorky parents?”
  • Instagram is serious and sincere (see: the success of the social justice slideshow) and almost never ironic — maybe because static visual irony is pretty hard to pull off.
  • Like YouTube, far fewer people are posting than consuming, which means that most people aren’t speaking at all.
  • And then there’s Instagram. People think Instagram is for extroverts, for people who want to broadcast every bit of their lives, but most Instagram users I know are shy — at least with public words. Instagram is where parents post pictures of their kids with the caption “these guys right here” or a picture of their dog with “a very good boy.”
  • The text doesn’t matter; the photo speaks loudest. Each post becomes overdetermined, especially when so readily viewed within the context of the greater grid
  • The more you understand your value as the sum of your visual parts, the more addictive, essential, and anxiety-producing Instagram becomes.
  • That emphasis on aesthetic perfection is part of what feminizes Instagram — but it’s also what makes it the most natural home for brands, celebrities, and influencers.
  • a static image can communicate a whole lifestyle — and brands have had decades of practice honing the practice in magazine ads and catalogs.
  • And what is an influencer if not a conduit for brands? What is a celebrity if not a conduit for their own constellation of brands?
  • If LinkedIn is the place where you can pretend that your whole life and personality is “business,” then Instagram is where you can pretend it’s all some form of leisure — or at least fun
  • A “fun” work trip, a “fun” behind-the-scenes shot, a brand doing the very hard work of trying to get you to click through and make a purchase with images that are fun fun fun.
  • On the flip side, Twitter was where you spoke with your real (verified) name — and with great, algorithm-assisted importance. You could amass clout simply by rephrasing others’ scoops in your own words, declaring opinions as facts, or just declaring. If Twitter was gendered masculine — which it certainly was, and is arguably even more so now — it was only because all of those behaviors are as well.
  • Instagram is a great place to post an announcement and feel celebrated or consoled but not feel like you have to respond to people
  • The conversation is easier to both control and ignore; of all the social networks, it most closely resembles the fawning broadcast style of the fan magazine, only the celebs control the final edit, not the magazine publisher
  • Celebrities initially glommed to Twitter
  • But its utility gradually faded: part of the problem was harassment, but part of it was context collapse, and the way it allowed words to travel across the platform and out of the celebrity’s control.
  • Instagram was just so much simpler, the communication so clearly in the celebrity wheelhouse. There is very little context collapse on Instagram — it’s all curation and control. As such, you can look interesting but say very little.

Got Twitter? What's Your Influence Score - NYTimes.com - 1 views

  • IMAGINE a world in which we are assigned a number that indicates how influential we are. This number would help determine whether you receive a job, a hotel-room upgrade or free samples at the supermarket. If your influence score is low, you don’t get the promotion, the suite or the complimentary cookies.
  • it’s not enough to attract Twitter followers — you must inspire those followers to take action.
  • “Now you are being assigned a number in a very public way, whether you want it or not,”
  • ...7 more annotations...
  • “It’s going to be publicly accessible to the people you date, the people you work for. It’s fast becoming mainstream.”
  • Audi would begin offering promotions to Facebook users based on their Klout score. Last year, Virgin America used the company to offer highly rated influencers in Toronto free round-trip flights to San Francisco or Los Angeles.
  • If you have a Facebook, Twitter or LinkedIn account, you are already being judged — or will be soon. Companies with names like Klout, PeerIndex and Twitter Grader are in the process of scoring millions, eventually billions, of people on their level of influence — or in the lingo, rating “influencers.” Yet the companies are not simply looking at the number of followers or friends you’ve amassed. Rather, they are beginning to measure influence in more nuanced ways, and posting their judgments — in the form of a score — online.
  • focus your digital presence on one or two areas of interest. Don’t be a generalist. Most importantly: be passionate, knowledgeable and trustworthy.
  • As for influence in the offline world — it doesn’t count.
  • Klout “lacks sentiment analysis” — so a user who generates a lot of digital chatter might receive a high score even though what’s being said about the user is negative.
  • we are moving closer to creating “social media caste systems,” where people with high scores get preferential treatment by retailers, prospective employers, even prospective dates.

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate's application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people's pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.

Elon Musk Hates Ads. Twitter Needs Them. That May Be a Problem. - The New York Times - 0 views

  • Since he started pursuing his $44 billion purchase of Twitter — and for years before that — the world’s richest man has made clear that advertising is not a priority.
  • Ads account for roughly 90 percent of Twitter’s revenue.
  • They have cited a litany of complaints, including that the company cannot target ads nearly as well as competitors like Facebook, Google and Amazon.
  • ...10 more annotations...
  • Now, numerous advertising executives say they’re willing to move their money elsewhere, especially if Mr. Musk eliminates the safeguards that allowed Twitter to remove racist rants and conspiracy theories.
  • “At the end of the day, it’s not the brands who need to be concerned, because they’ll just spend their budgets elsewhere — it’s Twitter that needs to be concerned,” said David Jones
  • On Wednesday night, at Twitter’s annual NewFronts presentation for advertisers at Pier 17 in New York, company representatives stressed Twitter’s value for marketers: as a top destination for people to gather and discuss major cultural moments like sporting events or the Met Gala, increasingly through video posts.
  • Twitter differs from Facebook, whose millions of small and midsize advertisers generate the bulk of the company’s revenue and depend on its enormous size and targeting abilities to reach customers. Twitter’s clientele is heavily weighted with large, mainstream companies, which tend to be wary of their ads appearing alongside problematic content.
  • Twitter earns the vast majority of its ad revenue from brand awareness campaigns, whose effectiveness is much harder to evaluate than ads that target users based on their interests or that push for a direct response, such as clicking through to a website.
  • Twitter’s reach is also narrower than many rivals, with 229 million users who see ads, compared with 830 million users on LinkedIn and 1.96 billion daily users on Facebook.
  • “Even the likes of LinkedIn have eclipsed the ability for us to target consumers beyond what Twitter is providing,” he said. “We’re going to go where the results are, and with a lot of our clients, we haven’t seen the performance on Twitter from an ad perspective that we have with other platforms.”
  • “Twitter’s done a better job than many platforms at building trust with advertisers — they’ve been more progressive, more responsive and more humble about initiating ways to learn,” said Mark Read
  • On Twitter, he has criticized ads as “manipulating public opinion” and discussed his refusal to “pay famous people to fake endorse.”
  • “There’s a fork in the road, where Path A leads to an unfiltered place with the worst of human behavior and no brands want to go anywhere near it,” said Mr. Jones of Brandtech. “And Path B has one of the world’s genius entrepreneurs, who knows a lot about running companies, unleashing a wave of innovation that has people looking back in a few years and saying, ‘Remember when everyone was worried about Musk coming in?’”

Opinion | Have Some Sympathy - The New York Times - 0 views

  • Schools and parenting guides instruct children in how to cultivate empathy, as do workplace culture and wellness programs. You could fill entire bookshelves with guides to finding, embracing and sharing empathy. Few books or lesson plans extol sympathy’s virtues.
  • “Sympathy focuses on offering support from a distance,” a therapist explains on LinkedIn, whereas empathy “goes beyond sympathy by actively immersing oneself in another person’s emotions and attempting to comprehend their point of view.”
  • In use since the 16th century, when the Greek “syn-” (“with”) combined with pathos (experience, misfortune, emotion, condition) to mean “having common feelings,” sympathy preceded empathy by a good four centuries
  • ...8 more annotations...
  • Empathy (the “em” means “into”) barged in from the German in the 20th century and gained popularity through its usage in fields like philosophy, aesthetics and psychology. According to my benighted 1989 edition of Webster’s Unabridged, empathy was the more self-centered emotion, “the intellectual identification with or vicarious experiencing of the feelings, thoughts or attitudes of another.”
  • in more updated lexicons, it’s as if the two words had reversed. Sympathy now implies a hierarchy whereas empathy is the more egalitarian sentiment.
  • Sympathy, the session’s leader explained to school staff members, was seeing someone in a hole and saying, “Too bad you’re in a hole,” whereas empathy meant getting in the hole, too.
  • “Empathy is a choice and it’s a vulnerable choice because in order to connect with you, I have to connect with something in myself that knows that feeling,”
  • Still, it’s hard to square the new emphasis on empathy — you must feel what others feel — with another element of the current discourse. According to what’s known as “standpoint theory,” your view necessarily depends on your own experience: You can’t possibly know what others feel.
  • In short, no matter how much an empath you may be, unless you have actually been in someone’s place, with all its experiences and limitations, you cannot understand where that person is coming from. The object of your empathy may find it presumptuous of you to think that you “get it.”
  • Bloom asks us to imagine what empathy demands should a friend’s child drown. “A highly empathetic response would be to feel what your friend feels, to experience, as much as you can, the terrible sorrow and pain,” he writes. “In contrast, compassion involves concern and love for your friend, and the desire and motivation to help, but it need not involve mirroring your friend’s anguish.”
  • Bloom argues for a more rational, modulated, compassionate response. Something that sounds a little more like our old friend sympathy.

Apple backed by more online giants in FBI iPhone unlock battle - BBC News - 0 views

  • The FBI has a court order demanding Apple helps unlock an iPhone used by the gunman behind the San Bernardino terror attack, Syed Rizwan Farook.
  • Apple has appealed against the court order, arguing that it should not be forced to weaken the security of its own products.
  • Apple has argued that the move would jeopardise the trust it has with its customers and create a backdoor for government agencies to access customer data.
  • ...1 more annotation...
  • Twitter, AirBnB, Ebay, LinkedIn and Reddit are among a group of 17 major online companies to have formally backed Apple in its court dispute with the FBI.

Facebook Has 50 Minutes of Your Time Each Day. It Wants More. - The New York Times - 0 views

  • Fifty minutes. That's the average amount of time, the company said, that users spend each day on its Facebook, Instagram and Messenger platforms
  • there are only 24 hours in a day, and the average person sleeps for 8.8 of them. That means more than one-sixteenth of the average user’s waking time is spent on Facebook.
  • That’s more than any other leisure activity surveyed by the Bureau of Labor Statistics, with the exception of watching television programs and movies (an average per day of 2.8 hours)
  • ...19 more annotations...
  • It’s more time than people spend reading (19 minutes); participating in sports or exercise (17 minutes); or social events (four minutes). It’s almost as much time as people spend eating and drinking (1.07 hours).
  • the average time people spend on Facebook has gone up — from around 40 minutes in 2014 — even as the number of monthly active users has surged. And that’s just the average. Some users must be spending many hours a day on the site,
  • time has become the holy grail of digital media.
  • Time is the best measure of engagement, and engagement correlates with advertising effectiveness. Time also increases the supply of impressions that Facebook can sell, which brings in more revenue (a 52 percent increase last quarter to $5.4 billion).
  • And time enables Facebook to learn more about its users — their habits and interests — and thus better target its ads. The result is a powerful network effect that competitors will be hard pressed to match.
  • the only one that comes close is Alphabet’s YouTube, where users spent an average of 17 minutes a day on the site. That’s less than half the 35 minutes a day users spent on Facebook
  • ComScore reported that television viewing (both live and recorded) dropped 2 percent last year, and it said younger viewers in particular are abandoning traditional live television. People ages 18-34 spent just 47 percent of their viewing time on television screens, and 40 percent on mobile devices.
  • People spending the most time on Facebook also tend to fall into the prized 18-to-34 demographic sought by advertisers.
  • “You hear a narrative that young people are fleeing Facebook. The data show that’s just not true. Younger users have a wider appetite for social media, and they spend a lot of time on multiple networks. But they spend more time on Facebook by a wide margin.”
  • What aren’t Facebook users doing during the 50 minutes they spend there? Is it possibly interfering with work (and productivity), or, in the case of young people, studying and reading?
  • While the Bureau of Labor Statistics surveys nearly every conceivable time-occupying activity (even fencing and spelunking), it doesn’t specifically tally the time spent on social media, both because the activity may have multiple purposes — both work and leisure — and because people often do it at the same time they are ostensibly engaged in other activities
  • The closest category would be “computer use for leisure,” which has grown from eight minutes in 2006, when the bureau began collecting the data, to 14 minutes in 2014, the most recent survey. Or perhaps it would be “socializing and communicating with others,” which slipped from 40 minutes to 38 minutes.
  • But time spent on most leisure activities hasn’t changed much in those eight years of the bureau’s surveys. Time spent reading dropped from an average of 22 minutes to 19 minutes. Watching television and movies increased from 2.57 hours to 2.8. Average time spent working declined from 3.4 hours to 3.25. (Those hours seem low because much of the population, which includes both young people and the elderly, does not work.)
  • The bureau’s numbers, since they cover the entire population, may be too broad to capture important shifts among important demographic groups
  • Users spent an average of nine minutes on all of Yahoo’s sites, two minutes on LinkedIn and just one minute on Twitter
  • Among those 55 and older, 70 percent of their viewing time was on television, according to comScore. So among young people, much social media time may be coming at the expense of traditional television.
  • comScore’s data suggests that people are spending on average just six to seven minutes a day using social media on their work computers. “I don’t think Facebook is displacing other activity,” he said. “People use it during downtime during the course of their day, in the elevator, or while commuting, or waiting.
  • Facebook, naturally, is busy cooking up ways to get us to spend even more time on the platform
  • A crucial initiative is improving its News Feed, tailoring it more precisely to the needs and interests of its users, based on how long people spend reading particular posts. For people who demonstrate a preference for video, more video will appear near the top of their news feed. The more time people spend on Facebook, the more data they will generate about themselves, and the better the company will get at the task.

Quitters Never Win: The Costs of Leaving Social Media - Woodrow Hartzog and Evan Seling... - 2 views

  • Manjoo offers this security-centric path for folks who are anxious about the service being "one the most intrusive technologies ever built," and believe that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Bottom line: stop tuning in and start dropping out if you suspect that the culture of oversharing, digital narcissism, and, above all, big-data-hungry, corporate profiteering will trump privacy settings.
  • Angwin plans on keeping a bare-bones profile. She'll maintain just enough presence to send private messages, review tagged photos, and be easy for readers to find. Others might try similar experiments, perhaps keeping friends, but reducing their communication to banal and innocuous expressions. But, would such disclosures be compelling or sincere enough to retain the technology's utility?
  • The other unattractive option is for social web users to willingly pay for connectivity with extreme publicity.
  • ...9 more annotations...
  • go this route if you believe privacy is dead, but find social networking too good to miss out on.
  • While we should be attuned to constraints and their consequences, there are at least four problems with conceptualizing the social media user's dilemma as a version of "if you can't stand the heat, get out of the kitchen".
  • The efficacy of abandoning social media can be questioned when others are free to share information about you on a platform long after you've left.
  • Second, while abandoning a single social technology might seem easy, this "love it or leave it" strategy -- which demands extreme caution and foresight from users and punishes them for their naivete -- isn't sustainable without great cost in the aggregate. If we look past the consequences of opting out of a specific service (like Facebook), we find a disconcerting and more far-reaching possibility: behavior that justifies a never-ending strategy of abandoning every social technology that threatens privacy -- a can being kicked down the road in perpetuity without us resolving the hard question of whether a satisfying balance between protection and publicity can be found online
  • if your current social network has no obligation to respect the obscurity of your information, what justifies believing other companies will continue to be trustworthy over time?
  • Sticking with the opt-out procedure turns digital life into a paranoid game of whack-a-mole where the goal is to stay ahead of the crushing mallet. Unfortunately, this path of perilously transferring risk from one medium to another is the direction we're headed if social media users can't make reasonable decisions based on the current context of obscurity, but instead are asked to assume all online social interaction can or will eventually lose its obscurity protection.
  • The fourth problem with the "leave if you're unhappy" ethos is that it is overly individualistic. If a critical mass participates in the "Opt-Out Revolution," what would happen to the struggling, the lonely, the curious, the caring, and the collaborative if the social web went dark?
  • Our point is that there is a middle ground between reclusion and widespread publicity, and the reduction of user options to quitting or coping, which are both problematic, need not be inevitable, especially when we can continue exploring ways to alleviate the user burden of retreat and the societal cost of a dark social web.
  • it is easy to presume that "even if you unfriend everybody on Facebook, and you never join Twitter, and you don't have a LinkedIn profile or an About.me page or much else in the way of online presence, you're still going to end up being mapped and charted and slotted in to your rightful place in the global social network that is life." But so long as it remains possible to create obscurity through privacy enhancing technology, effective regulation, contextually appropriate privacy settings, circumspect behavior, and a clear understanding of how our data can be accessed and processed, that fatalism isn't justified.

If Twitter is a Work Necessity - NYTimes.com - 0 views

  • For midcareer executives, particularly in the media and related industries, knowing how to use Twitter, update your timeline on Facebook, pin on Pinterest, check in on Foursquare and upload images on Instagram are among the digital skills that some employers expect people to have to land a job or to flourish in a current role.
  • digital literacy, including understanding social networking, is now a required skill. “They are essential skills that are needed to operate in the world and in the workplace,” she said. “And people will either need to learn through formal training or through their networks or they will feel increasingly left out.”
  • “If you don’t have a LinkedIn or Facebook account, then employers often don’t have a way to find out about you,” she said.
  • ...1 more annotation...
  • “We have to think about social media in a new strategic way,” he said. “It is no longer something that we can ignore. It is not a place to just wish your friends happy birthday. It is a place of business. It is a place where your career will be enhanced or degraded, depending on your use of these tools and services.”

Creativity Becomes an Academic Discipline - NYTimes.com - 0 views

  • Once considered the product of genius or divine inspiration, creativity — the ability to spot problems and devise smart solutions — is being recast as a prized and teachable skill.
  • “The reality is that to survive in a fast-changing world you need to be creative,”
  • “That is why you are seeing more attention to creativity at universities,” he says. “The marketplace is demanding it.”
  • ...16 more annotations...
  • Creativity moves beyond mere synthesis and evaluation and is, he says, “the higher order skill.” This has not been a sudden development. Nearly 20 years ago “creating” replaced “evaluation” at the top of Bloom’s Taxonomy of learning objectives. In 2010 “creativity” was the factor most crucial for success found in an I.B.M. survey of 1,500 chief executives in 33 industries. These days “creative” is the most used buzzword in LinkedIn profiles two years running.
  • The method, which is used in Buffalo State classrooms, has four steps: clarifying, ideating, developing and implementing. People tend to gravitate to particular steps, suggesting their primary thinking style.
  • What’s igniting campuses, though, is the conviction that everyone is creative, and can learn to be more so.
  • Just about every pedagogical toolbox taps similar strategies, employing divergent thinking (generating multiple ideas) and convergent thinking (finding what works). The real genius, of course, is in the how.
  • as content knowledge evolves at lightning speed, educators are talking more and more about “process skills,” strategies to reframe challenges and extrapolate and transform information, and to accept and deal with ambiguity.
  • Clarifying — asking the right question — is critical because people often misstate or misperceive a problem. “If you don’t have the right frame for the situation, it’s difficult to come up with a breakthrough,
  • Ideating is brainstorming and calls for getting rid of your inner naysayer to let your imagination fly.
  • Developing is building out a solution, and maybe finding that it doesn’t work and having to start over
  • Implementing calls for convincing others that your idea has value.
  • “the frequency and intensity of failures is an implicit principle of the course. Getting into a creative mind-set involves a lot of trial and error.”
  • His favorite assignments? Construct a résumé based on things that didn’t work out and find the meaning and influence these have had on your choices.
  • “Examine what in the culture is preventing you from creating something new or different. And what is it like to look like a fool because a lot of things won’t work out and you will look foolish? So how do you handle that?”
  • Because academics run from failure, Mr. Keywell says, universities are “way too often shapers of formulaic minds,” and encourage students to repeat and internalize fail-safe ideas.
  • “The new people who will be creative will sit at the juxtaposition of two or more fields,” she says. When ideas from different fields collide, Dr. Cramond says, fresh ones are generated.
  • Basic creativity tools used at the Torrance Center include thinking by analogy, looking for and making patterns, playing, literally, to encourage ideas, and learning to abstract problems to their essence.
  • students explore definitions of creativity, characteristics of creative people and strategies to enhance their own creativity. These include rephrasing problems as questions, learning not to instinctively shoot down a new idea (first find three positives), and categorizing problems as needing a solution that requires either action, planning or invention.

College Scorecard Sandbags Equity in Higher Education | Patricia McGuire - 0 views

  • the "haves" in higher education have quite a lot; the "have nots" struggle mightily. And this economic chasm is seriously influenced by gender, race and social class -- issues on which the College Scorecard is silent, but which affect just about every factoid presented
  • The reality is that even smart wonks educated at some of the best of the "haves" can be blind to social reality; their monument to algorithmic gymnastics in the College Scorecard obscures some of the most important and painful facts about college life and American society today.
  • The administration presents the collegiate earnings data as if it were value-neutral, not only with no reference to the mission of institutions that may have different values from those the administration apparently exalts, but even more devastatingly, with no reference to the pernicious effects of gender and race discrimination on career opportunities and earnings.
  • I am not a wonk, but I did prepare this chart based on data in the College Scorecard and the federal data system IPEDS
  • The value-neutral approach to the collegiate earnings data ignores the facts of life about women and families.
  • 74% of all undergraduates have at least one "non-traditional" characteristic, and more than 55% have two or more non-traditional characteristics such as having children, being a caregiver, delaying college enrollment, attending part-time, working full-time.
  • But the College Scorecard completely ignores the increasingly non-traditional nature of the nation's undergraduate student body today, and instead, presents data as if most college students are privileged children whiling away four years in some grove of academic luxury
  • The Obama administration claims that the new College Scorecard will provide more "transparent" data to students and families trying to decide which college to attend. Unfortunately, by presenting some data in value-neutral or misleading ways, and ignoring other truly important questions in the college choice process
  • the administration presents a data mashup with limited utility for consumers but large potential for misrepresentation of social realities.

It's Time for a Real Code of Ethics in Teaching - Noah Berlatsky - The Atlantic - 3 views

  • [Photo: A defendant in the Atlanta Public Schools case turns herself in at the Fulton County Jail on April 2. (David Goldman/AP)] Earlier this week at The Atlantic, Emily Richmond asked whether high-stakes testing caused the Atlanta schools cheating scandal. The answer, I would argue, is yes... just not in the way you might think. Tests don't cause unethical behavior. But they did cause the Atlanta cheating scandal, and they are doing damage to the teaching profession. The argument that tests do not cause unethical behavior is fairly straightforward, and has been articulated by a number of writers. Jonathan Chait quite correctly points out that unethical behavior occurs in virtually all professions -- and that it occurs particularly when there are clear incentives to succeed. Incentivizing any field increases the impetus to cheat. Suppose journalism worked the way teaching traditionally had. You get hired at a newspaper, and your advancement and pay are dictated almost entirely by your years on the job, with almost no chance of either becoming a star or of getting fired for incompetence. Then imagine journalists changed that and instituted the current system, where you can get really successful if your bosses like you or be fired if they don't. You could look around and see scandal after scandal -- phone hacking! Jayson Blair! NBC's exploding truck! Janet Cooke! Stephen Glass! -- that could plausibly be attributed to this frightening new world in which journalists had an incentive to cheat in order to get ahead. It holds true of any field. If Major League Baseball instituted tenure, and maybe used tee-ball rules where you can't keep score and everybody gets a chance to hit, it could stamp out steroid use. Students have been cheating on tests forever -- massive, systematic cheating, you could say. Why? Because they have an incentive to do well. Give teachers and administrators an incentive for their students to do well, and more of them will cheat. For Chait, then, teaching has just been made more like journalism or baseball; it has gone from an incentiveless occupation to one with incentives.
  • Chait refers to violations of journalistic ethics -- like the phone-hacking scandal -- and suggests they are analogous to Major-League steroid use, and that both are similar to teachers (or students) cheating on tests. But is phone hacking "cheating"
  • Phone hacking was, then, not an example of cheating. It was a violation of professional ethics. And those ethics are not arbitrarily imposed, but are intrinsic to the practice of journalism as a profession committed to public service and to truth.
  • Behaving ethically matters, but how it matters, and what it means, depends strongly on the context in which it occurs.
  • Ethics for teachers is not, apparently, first and foremost about educating their students, or broadening their minds. Rather, ethics for teachers in our current system consists in following the rules. The implicit, linguistic signal being given is that teachers are not like journalists or doctors, committed to a profession and to the moral code needed to achieve their professional goals. Instead, they are like athletes playing games, or (as Chait says) like children taking tests.
  • Using "cheating" as an ethical lens tends to both trivialize and infantilize teacher's work
  • Professions with social respect and social capital, like doctors and lawyers, collaborate in the creation of their own standards. The assumption is that those standards are intrinsic to the profession's goals, and that, therefore, professionals themselves are best equipped to establish and monitor them. Teachers' standards, though, are imposed from outside -- as if teachers are children, or as if teaching is a game.
  • High-stakes testing, then, does lead to cheating. It does not create unethical behavior -- but it does create the particular unethical behavior of "cheating."
  • We have reached a point where we can only talk about the ethics of the profession in terms of cheating or not cheating, as if teachers' main ethical duty is to make sure that scantron bubbles get filled in correctly. Teachers, like journalists, should have a commitment to truth; like doctors, they have a duty of care. Translating those commitments and duties into a bureaucratized measure of cheating-or-not-cheating diminishes ethics
  • For teachers it is, literally, demoralizing. It severs the moral experience of teaching from the moral evaluation of teaching, which makes it almost impossible for good teachers (in all the senses of "good") to stay in the system.
  • We need better ethics for teachers -- ethics that treat them as adults and professionals, not like children playing games.

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol... - 0 views

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform after three years struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day.
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • g. “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”

Opinion | Scott Pruitt's Attack on Science Would Paralyze the E.P.A. - The New York Times - 0 views

  • It is his latest effort to cripple the agency. Mr. Pruitt, who as Oklahoma’s attorney general described himself as “a leading advocate against the E.P.A.’s activist agenda,” said in an interview published in The Daily Caller last week that he would no longer allow the agency to use studies that include nonpublic scientific data to develop rules to safeguard public health and prevent pollution.
  • Opponents of the agency and of mainstream climate science call these studies “secret science.” But that’s simply not true. Peer review ensures that the analytic methodologies underlying studies funded by the agency are sound.
  • Some of those studies, particularly those that determine the effects of exposure to chemicals and pollution on health, rely on medical records that by law are confidential because of patient privacy policies. These studies summarize the analysis of raw data and draw conclusions based on that analysis. Other government agencies also use studies like these to develop policy and regulations, and to buttress and defend rules against legal challenges. They are, in fact, essential to making sound public policy.
  • The agency also relies on industry data to develop rules on chemical safety that is often kept confidential for business reasons.
  • So why would he want to prohibit his own agency from using these studies? It’s not a mystery. Time and again the Trump administration has put the profits of regulated industries over the health of the American people. Fundamental research on the effects of air pollution on public health has long been a target of those who oppose the E.P.A.’s air quality regulations, like the rule that requires power plants to reduce their mercury emissions.

I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets | Techn... - 0 views

  • I emailed Tinder requesting my personal data and got back way more than I bargained for. Some 800 pages came back containing information such as my Facebook “likes”, my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened … the list goes on.
  • “You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”
  • What will happen if this treasure trove of data gets hacked, is made public or simply bought by another company? I can almost feel the shame I would experience. The thought that, before sending me these 800 pages, someone at Tinder might have read them already makes me cringe.
  • In May, an algorithm was used to scrape 40,000 profile images from the platform in order to build an AI to “genderise” faces. A few months earlier, 70,000 profiles from OkCupid (owned by Tinder’s parent company Match Group) were made public by a Danish researcher some commentators have labelled a “white supremacist”, who used the data to try to establish a link between intelligence and religious beliefs. The data is still out there.
  • The trouble is these 800 pages of my most intimate data are actually just the tip of the iceberg. “Your personal data affects who you see first on Tinder, yes,” says Dehaye. “But also what job offers you have access to on LinkedIn, how much you will pay for insuring your car, which ad you will see in the tube and if you can subscribe to a loan. “We are leaning towards a more and more opaque society, towards an even more intangible world where data collected about you will decide even larger facets of your life. Eventually, your whole existence will be affected.”
  • As a typical millennial constantly glued to my phone, my virtual life has fully merged with my real life. There is no difference any more. Tinder is how I meet people, so this is my reality. It is a reality that is constantly being shaped by others – but good luck trying to find out how.

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.'
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.

Opinion | Why a Digital Diary Will Change Your Life - The New York Times - 0 views

  • At first, my plan was to do what I always do when I see something halfway noteworthy, which is to tell a few hundred thousand people on Twitter, Facebook, Instagram or, in my lowest moments, even LinkedIn.
  • Smartphones and social networks have turned me into a lonely, needy man who requires constant affirmation. In desperate pursuit of such affirmation, my mind has come to resemble one of those stamping-machine assembly lines you see in cartoons, but for shareable content: The raw, analog world in all its glory enters via conveyor belt on one end, and, after some raucous puffs of smoke, it gets flattened and packaged in my head into insipid quips meant to inspire you to tap a tiny heart on a screen.
  • instead of sharing the silly lampshade joke, I journaled it in Day One, a magnificent digital diary app that has transformed my relationship with my phone, improved my memory, and given me a deeper perspective on my life than the one I was getting through the black mirror of social media.
  • In recent years, Twitter and much of the rest of the internet have been getting hotter, more reflexively outraged, less fun. Venturing onto social media these days, I often feel like a cat burglar stepping through a field of upturned rakes. I could imagine my dumb joke getting picked apart for all the ways it was problematic — “New York Times writer casually encourages bestial sexual assault! #deertoo” — bringing me ever closer to cancellation.
  • It’s unsocial. Indeed, it’s downright antisocial. Nothing about the app is meant to be shared — it is protected with your Apple security credentials and backs up its data to the cloud using end-to-end encryption, so that the only way someone can get into your diary is by getting hold of your device and your system passcode.
  • You post updates to it just as you might on Instagram or Facebook.
  • The app — which runs on Macs, iPhones and iPads, syncing your entries between your devices — can handle long text journals, short picture-focused status updates, and pretty much anything else that comes across the digital transom.
  • I use it to jot down my deepest thoughts and shallowest jokes; to rant and to vent; to come to terms with new ideas I’m playing with, ideas that need time to marinate in secret before they’re ready for the world; and to collect and reflect upon all the weird and crazy and touching artifacts of life
  • Think of Day One as a private social network for an audience of one: yourself.
  • Day One creates something so rare it feels almost sacred: A completely private digital space.
  • The best way to describe this feeling is to liken it to friendship. I feel comfortable dishing to Day One the way I would to a close friend I trust completely.
  • one of the few digital spaces that provides you mental space for contemplation and consideration
  • journaling has been shown to be good for mind and body, reducing stress and anxiety, improving interpersonal relationships, and promoting creativity
  • a digital journal offers several benefits over paper. Easy accessibility is a big one
  • you can tap out a journal while you’re in line at the supermarket
  • because so much happens on screens now, Day One offers greater fidelity to daily life. Instead of describing the insane conversation I had with my co-worker, I can just post a screenshot.
  • photography, which adds emotional heft to the rigidity of text.

The Adams Principle ❧ Current Affairs - 0 views

  • This type of glib quasi-logic works really well in comedy, especially in a format where space is restricted, and where the quick, disposable nature of the strip limits your ability to draw humor from character and plot. You take an idea, find a way to subvert or deconstruct it, and you get an absurd result.
  • while the idea of a “cubicle job” can seem to younger readers like relative bliss, they were (and are) still an emblem of boredom and absurdity, a sign that life was being slowly colonized by gray shapes and Powerpoint slides. Throughout his classic-era work, Adams hits on the feeling that the world has been made unnatural, unconducive to life; materially adequate, but spiritually exhausting. 
  • He makes constant use of something I’m going to call, for want of a better term, the sophoid: something which has the outer semblance of wisdom, but none of the substance; something that sounds weighty if you say it confidently enough, yet can be easily thrown away as “just a thought” if it won’t hold up to scrutiny.
  • Adams did not just stick to comics: he is the author of over a dozen books (not counting the comic compendiums), which advise and analyze not only on surviving the office but also on daily life, future technology trends, romance, self-help strategy, and more. 
  • In his earlier books, you can feel the weight of the 1990s pressing down on his work, flattening and numbing its potency; this was the period that social scientist Francis Fukuyama dubbed “the end of history”, when the Cold War had ended, the West had won, 9/11 was just two numbers, and there were no grand missions left, no worlds left to conquer. While for millions of people, both in the United States and abroad, life was still chaotic and miserable, a lot of people found themselves living lives that were under no great immediate threat: without bombs or fascism or the threat of eviction to worry about, there was nothing left to do but to go to the office and enjoy fast-casual dining and Big Gulps, just as the Founding Fathers envisioned.
  • This dull but steady life produced a sense of slow-burn anxiety prominent in much of the pop culture of the time, as can be seen in movies such as Office Space, Fight Club and The Matrix, movies which cooed to their audience: there’s got to be more to life than this, right?
  • Beware: as I’m pretty sure Nietzsche said, when you gaze into Dilbert, eventually Dilbert gazes back into you.
  • for someone who satirizes business bullshit, Adams is a person who seems to have bought into much of it wholeheartedly; when he explains his approach to life he tends to speak in LinkedIn truisms, expounding on his “skill stacks” and “maximizing [his] personal energy”. (You can read more about this in his career advice book, How to Fail at Almost Everything and Still Win Big;
  • Following his non-Dilbert career more carefully, you can see that at every stage of his career, he’s actually quite heavily invested in the bullshit he makes fun of every day, or at least some aspects of it: he possesses an MBA from UC Berkeley, and has launched or otherwise been involved in a significant number of business ventures, most amusingly a health food wrap called the “Dilberito”.
  • In the past few years, Adams has gained some notoriety as a Trump supporter; having slowly moved from “vaguely all-over-the-place centrist who has some odd thoughts and thinks some aspects of Trump are impressive” to full-on MAGA guy, even writing a book called Win Bigly praising Trump’s abilities as a “master persuader”.
  • this is a guy who hates drab corporatespeak but loves the ideology behind it, a guy who describes the vast powerlessness of life but believes you can change it by writing some words on a napkin. That blend of rebellion against the symptoms of post-Cold War society and sworn allegiance to its machinations couldn’t lead anywhere else but to Trump, a man who rails against ‘elites’ while allowing them to run the country into the ground.
  • In Dilbert the Pointy-haired Boss uses this type of thinking to evil ends, in the tradition of Catch-22 and other satires of systemic brutality, but the relatable characters use it to their advantage too—by using intellectual sleight of hand with the boss to justify doing less work, or by finding clever ways to look busy when they’re not, or to avoid people who are unpleasant to be around.
  • I just think Adams is a guy who spent so long in the world of slick aphorisms and comic-strip logic that it eventually ate into his brain, became his entire manner of thinking

The Great PowerPoint Panic of 2003 - The Atlantic - 0 views

  • if all of those bad presentations really led to broad societal ills, the proof is hard to find.
  • Some scientists have tried to take a formal measure of the alleged PowerPoint Effect, asking whether the software really influences our ability to process information. Sebastian Kernbach, a professor of creativity and design at the University of St. Gallen, in Switzerland, has co-authored multiple reviews synthesizing this literature. On the whole, he told me, the research suggests that Tufte was partly right, partly wrong. PowerPoint doesn’t seem to make us stupid—there is no evidence of lower information retention or generalized cognitive decline, for example, among those who use it—but it does impose a set of assumptions about how information ought to be conveyed: loosely, in bullet points, and delivered by presenters to an audience of passive listeners. These assumptions have even reshaped the physical environment for the slide-deck age, Kernbach said: Seminar tables, once configured in a circle, have been bent, post-PowerPoint, into a U-shape to accommodate presenters.
  • When I spoke with Kernbach, he was preparing for a talk on different methods of visual thinking to a group of employees at a large governmental organization. He said he planned to use a flip chart, draw on blank slides like a white board, and perhaps even have audience members do some drawing of their own. But he was also gearing up to use regular old PowerPoint slides. Doing so, he told me, would “signal preparation and professionalism” for his audience. The organization was NASA.
  • The fact that the American space agency still uses PowerPoint should not be surprising. Despite the backlash it inspired in the press, and the bile that it raised in billionaires, and the red alert it caused within the military, the corporate-presentation juggernaut rolls on. The program has more monthly users than ever before, according to Shawn Villaron, Microsoft’s vice president of product for PowerPoint—well into the hundreds of millions. If anything, its use cases have proliferated. During lockdown, people threw PowerPoint parties on Zoom. Kids now make PowerPoint presentations for their parents when they want to get a puppy or quit soccer or attend a Niall Horan meet and greet. If PowerPoint is evil, then evil rules the world.
  • it’s tempting to entertain counterfactuals and wonder how things might have played out if Tufte and the rest of us had worried about social media back in 2003 instead of presentation software. Perhaps a timely pamphlet on The Cognitive Style of Friendster or a Wired headline asserting that “LinkedIn Is Evil” would have changed the course of history. If the social-media backlash of the past few years had been present from the start, maybe Facebook would never have grown into the behemoth it is now, and the country would never have become so hopelessly divided.
  • it could be that nothing whatsoever would have changed. No matter what their timing, and regardless of their aptness, concerns about new media rarely seem to make a difference. Objections get steamrolled. The new technology takes over. And years later, when we look back and think, How strange that we were so perturbed, the effects of that technology may well be invisible.