Home/ TOK Friends/ Group items tagged gatekeepers

Javier E

George Packer: Is Amazon Bad for Books? : The New Yorker - 0 views

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • ...38 more annotations...
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent.
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year.
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue.
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of them dreck.
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
Javier E

The Fall of Facebook - The Atlantic - 0 views

  • Alexis C. Madrigal, Nov 17, 2014, 7:59 PM ET: Social networking is not, it turns out, winner take all. In the past, one might have imagined that switching between Facebook and “some other network” would be difficult, but the smartphone interface makes it easy to be on a dozen networks. All messages come to the same place—the phone’s notifications screen—so what matters is what your friends are doing, not which apps they’re using.
  • if I were to put money on an area in which Facebook might be unable to dominate in the future, it would be apps that take advantage of physical proximity. Something radically new could arise on that front, whether it’s an evolution of Yik Yak
  • The Social Machine, predicts that text will be a less and less important part of our asynchronous communications mix. Instead, she foresees a “very fluid interface” that would mix text with voice, video, sensor outputs (location, say, or vital signs), and who knows what else
  • ...5 more annotations...
  • the forthcoming Apple Watch seems like a step toward the future Donath envisions. Users will be able to send animated smiley faces, drawings, voice snippets, and even their live heartbeats, which will be tapped out on the receiver’s wrist.
  • A simple but rich messaging platform—perhaps with specialized hardware—could replace the omnibus social network for most purposes. “I think we’re shifting in a weird way to one-on-one conversations on social networks and in messaging apps,” says Shani Hilton, the executive editor for news at BuzzFeed, the viral-media site. “People don’t want to perform their lives publicly in the same way that they wanted to five years ago.”
  • Facebook is built around a trade-off that it has asked users to make: Give us all your personal information, post all your pictures, tag all your friends, and so on, forever. In return, we’ll optimize your social life. But this output is only as good as the input. And it turns out that, when scaled up, creating this input—making yourself legible enough to the Facebook machine that your posts are deemed “relevant” and worthy of being displayed to your mom and your friends—is exhausting labor.
  • These new apps, then, are arguments that we can still have an Internet that is weird, and private. That we can still have social networks without the social network. And that we can still have friends on the Internet without “friending” them.
  • A Brief History of Information Gatekeepers:
    1871: Western Union controls 90 percent of U.S. telegraph traffic.
    1947: 97 percent of the country’s radio stations are affiliated with one of four national networks.
    1969: Viewership for the three nightly network newscasts hits an all-time high, with 50 percent of all American homes tuning in.
    1997: About half of all American homes with Internet access get it through America Online.
    2002: Microsoft Internet Explorer captures 97 percent of the worldwide browser market.
    2014: Amazon sells 63 percent of all books bought online—and 40 percent of books overall.
Javier E

The Fall of Facebook - The Atlantic - 0 views

  • When a research company looked at how people use their phones, it found that they spend more time on Facebook than they do browsing the entire rest of the Web.
  • Digital-media companies have grown reliant on Facebook’s powerful distribution capabilities.
  • this weakens the basic idea of a publication. The media bundles known as magazines and newspapers were built around letting advertisers reach an audience. But now virtually all of the audiences are in the same place, and media entities and advertisers alike know how to target them: they go to Facebook, select some options from a drop-down menu—18-to-24-year-old men in Maryland who are college-football fans—and their ads materialize in the feeds of that demographic.
  • ...9 more annotations...
  • when Google was the dominant distribution force on the Web, that fact was reflected in the kinds of content media companies produced—fact-filled, keyword-stuffed posts that Google’s software seemed to prefer.
  • while, once upon a time, everyone with a TV and an antenna could see “what was on,” Facebook news feeds are personalized, so no one outside the company actually knows what anyone else is seeing. This opacity would have been impossible to imagine in previous eras.
  • it is the most powerful information gatekeeper the world has ever known. It is only slightly hyperbolic to say that Facebook is like all the broadcast-television networks put together.
  • Facebook is different, though. It measures what is “engaging”—what you (and people you resemble, according to its databases) like, comment on, and share. Then it shows you more things related to that.
  • Facebook has built a self-perpetuating optimization machine. It’s as if every time you turned on the TV, your cable box ranked every episode of every show just for you. Or when you went to a bar, only the people you’d been hanging out with regularly showed up
  • It’s all enough to make you wonder whether Facebook, unlike AOL or MySpace, really might be forever
  • “In three years of research and talking to hundreds of people and everyday users, I don’t think I heard anyone say once, ‘I love Facebook.’ ”
  • The software’s primary attributes—its omniscience, its solicitousness—all too easily provoke claustrophobia.
  • users are spreading themselves around, maintaining Facebook as their social spine, but investing in and loving a wide variety of other social apps. None of them seems likely to supplant Facebook on its own, but taken together, they form a pretty decent network of networks, a dispersed alternative to Facebook life.
Javier E

(1) A Brief History of Media and Audiences and Twitter and The Bulwark - 0 views

  • In the old days—and here I mean even as recently as 2000 or 2004—audiences were built around media institutions. The New York Times had an audience. The New Yorker had an audience. The Weekly Standard had an audience.
  • If you were a writer, you got access to these audiences by contributing to the institutions. No one cared if you, John Smith, wrote a piece about Al Gore. But if your piece about Al Gore appeared in Washington Monthly, then suddenly you had an audience.
  • There were a handful of star writers for whom this wasn’t true: Maureen Dowd, Tom Wolfe, Joan Didion. Readers would follow these stars wherever they appeared. But they were the exceptions to the rule. And the only way to ascend to such exalted status was by writing a lot of great pieces for established institutions and slowly assembling your audience from theirs.
  • ...16 more annotations...
  • The internet stripped institutions of their gatekeeping powers, thus making it possible for anyone to publish—and making it inevitable that many writers would create audiences independent of media institutions.
  • The internet destroyed the apprenticeship system that had dominated American journalism for generations. Under the old system, an aspiring writer took a low-level job at a media institution and worked her way up the ladder until she was trusted enough to write.
  • Under the new system, people started their careers writing outside of institutions—on personal blogs—and then were hired by institutions on the strength of their work.
  • In practice, these outsiders were primarily hired not on the merits of their work, but because of the size of their audience.
  • what it really did was transform the nature of audiences. Once the internet existed it became inevitable that institutions would see their power to hold audiences wane while individual writers would have their power to build personal audiences explode.
  • this meant that institutions would begin to hire based on the size of a writer’s audience. Which meant that writers’ overriding professional imperative was to build an audience, since that was the key to advancement.
  • Twitter killed the blog and lowered the barrier to entry for new writers from “Must have a laptop, the ability to navigate WordPress, and the capacity to write paragraphs” to “Do you have an iPhone and the ability to string 20 words together? With or without punctuation?”
  • If you were able to build a big enough audience on Twitter, then media institutions fell all over themselves trying to hire you—because they believed that you would then bring your audience to them.2
  • If you were a writer for the Washington Post, or Wired, or the Saginaw Express, you had to build your own audience not to advance, but to avoid being replaced.
  • For journalists, audience wasn’t just status—it was professional capital. In fact, it was the most valuable professional capital.
  • Everything we just talked about was driven by the advertising model of media, which prized pageviews and unique users above all else. About a decade ago, that model started to fray around the edges,3 which caused a shift to the subscription model.
  • Today, if you’re a subscription publication, what Twitter gives you is growth opportunity. Twitter’s not the only channel for growth—there are lots of others, from TikTok to LinkedIn to YouTube to podcasts to search. But it’s an important one.
  • Twitter’s attack on Substack was an attack on the subscription model of journalism itself.
  • since media has already seen the ad-based model fall apart, it’s not clear what the alternative will be if the subscription model dies, too.
  • All of which is why having a major social media platform run by a capricious bad actor is suboptimal.
  • And why I think anyone else who’s concerned about the future of media ought to start hedging against Twitter. None of the direct hedges—Post, Mastodon, etc.—are viable yet. But tech history shows that these shifts can happen fairly quickly.
Javier E

Wielding Claims of 'Fake News,' Conservatives Take Aim at Mainstream Media - The New Yo... - 0 views

  • The C.I.A., the F.B.I. and the White House may all agree that Russia was behind the hacking that interfered with the election. But that was of no import to the website Breitbart News, which dismissed reports on the intelligence assessment as “left-wing fake news.”
  • Rush Limbaugh has diagnosed a more fundamental problem. “The fake news is the everyday news” in the mainstream media, he said on his radio show recently. “They just make it up.”
  • As reporters were walking out of a Trump rally this month in Orlando, Fla., a man heckled them with shouts of “Fake news!”
  • ...15 more annotations...
  • Until now, that term had been widely understood to refer to fabricated news accounts that are meant to spread virally online.
  • But conservative cable and radio personalities, top Republicans and even Mr. Trump himself, incredulous about suggestions that fake stories may have helped swing the election, have appropriated the term and turned it against any news they see as hostile to their agenda.
  • In defining “fake news” so broadly and seeking to dilute its meaning, they are capitalizing on the declining credibility of all purveyors of information, one product of the country’s increasing political polarization.
  • “Over the years, we’ve effectively brainwashed the core of our audience to distrust anything that they disagree with. And now it’s gone too far,” said John Ziegler, a conservative radio host, who has been critical of what he sees as excessive partisanship by pundits. “Because the gatekeepers have lost all credibility in the minds of consumers, I don’t see how you reverse it.”
  • Others see a larger effort to slander the basic journalistic function of fact-checking. Nonpartisan websites like Snopes and Factcheck.org have found themselves maligned when they have disproved stories that had been flattering to conservatives.
  • “Fake news was a term specifically about people who purposely fabricated stories for clicks and revenue,” said David Mikkelson, the founder of Snopes, the myth-busting website. “Now it includes bad reporting, slanted journalism and outright propaganda. And I think we’re doing a disservice to lump all those things together.”
  • Journalists who work to separate fact from fiction see a dangerous conflation of stories that turn out to be wrong because of a legitimate misunderstanding with those whose clear intention is to deceive. A report, shared more than a million times on social media, that the pope had endorsed Mr. Trump was undeniably false. But was it “fake news” to report on data models that showed Hillary Clinton with overwhelming odds of winning the presidency? Are opinion articles fake if they cherry-pick facts to draw disputable conclusions?
  • conservatives’ appropriation of the “fake news” label is an effort to further erode the mainstream media’s claim to be a reliable and accurate source.
  • Conservative news media are now awash in the “fake news” condemnations
  • Many conservatives are pushing back at the outrage over fake news because they believe that liberals, unwilling to accept Mr. Trump’s victory, are attributing his triumph to nefarious external factors.
  • The right’s labeling of “fake news” evokes one of the most successful efforts by conservatives to reorient how Americans think about news media objectivity: the move by Fox News to brand its conservative-slanted coverage as “fair and balanced.” Traditionally, mainstream media outlets had thought of their own approach in those terms, viewing their coverage as strictly down the middle. Republicans often found that laughable.
  • “They’re trying to float anything they can find out there to discredit fact-checking,”
  • There are already efforts by highly partisan conservatives to claim that their fact-checking efforts are the same as those of independent outlets like Snopes, which employ research teams to dig into seemingly dubious claims.
  • Sean Hannity, the Fox News host, has aired “fact-checking” segments on his program. Michelle Malkin, the conservative columnist, has a web program, “Michelle Malkin Investigates,” in which she conducts her own investigative reporting.
  • The market in these divided times is undeniably ripe. “We now live in this fragmented media world where you can block people you disagree with. You can only be exposed to stories that make you feel good about what you want to believe,” Mr. Ziegler, the radio host, said. “Unfortunately, the truth is unpopular a lot. And a good fairy tale beats a harsh truth every time.”
maxwellokolo

Four New Names Officially Added to the Periodic Table of Elements - 1 views

  • It's official. Chemistry's highest gatekeepers have accepted the newly proposed names for elements 113, 115, 117 and 118. Please welcome to the periodic table: Nihonium, Moscovium, Tennessine and Oganesson. Scientists first synthesized the new elements between 2002 and 2010, but it wasn't until December of 2015 that the International Union of Pure and Applied Chemistry officially recognized the discoveries.
Javier E

Google's new media apocalypse: How the search giant wants to accelerate the end of the ... - 0 views

  • Google is announcing that it wants to cut out the middleman—that is to say, other websites—and serve you content within its own lovely little walled garden. That sound you just heard was a bunch of media publishers rushing to book an extra appointment with their shrink.
  • Back when search, and not social media, ruled the internet, Google was the sun around which the news industry orbited. Getting to the top of Google’s results was the key that unlocked buckets of page views. Outlet after outlet spent countless hours trying to figure out how to game Google’s prized, secretive algorithm. Whole swaths of the industry were killed instantly if Google tweaked the algorithm.
  • Facebook is now the sun. Facebook is the company keeping everyone up at night. Facebook is the place shaping how stories get chosen, how they get written, how they are packaged and how they show up on its site. And Facebook does all of this with just as much secrecy and just as little accountability as Google did.
  • ...3 more annotations...
  • Facebook just opened up its Instant Articles feature to all publishers. The feature allows external outlets to publish their content directly onto Facebook’s platform, eliminating that pesky journey to their actual website. They can either place their own ads on the content or join a revenue-sharing program with Facebook. Facebook has touted this plan as one which provides a better user experience and has noted the ability for publishers to create ads on the platform as well.
  • The benefit to Facebook is obvious: It gets to keep people inside its house. They don’t have to leave for even a second. The publisher essentially has to accept this reality, sigh about the gradual death of websites and hope that everything works out on the financial side.
  • It’s all part of a much bigger story: that of how the internet, that supposed smasher of gates and leveler of playing fields, has coalesced around a mere handful of mega-giants in the space of just a couple of decades. The gates didn’t really come down. The identities of the gatekeepers just changed. Google, Facebook, Apple, Amazon
Javier E

"Wikipedia Is Not Truth" - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • 20 Feb 2012 12:30 PM, "Wikipedia Is Not Truth": Timothy Messer-Kruse tried to update the Wiki page on the Haymarket riot of 1886 to correct a long-standing inaccurate claim. Even though he's written two books and numerous articles on the subject, his changes were instantly rejected: I had cited the documents that proved my point, including verbatim testimony from the trial published online by the Library of Congress. I also noted one of my own peer-reviewed articles. One of the people who had assumed the role of keeper of this bit of history for Wikipedia quoted the Web site's "undue weight" policy, which states that "articles should not give minority views as much or as detailed a description as more popular views."
  • "Explain to me, then, how a 'minority' source with facts on its side would ever appear against a wrong 'majority' one?" I asked the Wiki-gatekeeper. ...  Another editor cheerfully tutored me in what this means: "Wikipedia is not 'truth,' Wikipedia is 'verifiability' of reliable sources. Hence, if most secondary sources which are taken as reliable happen to repeat a flawed account or description of something, Wikipedia will echo that."
Javier E

Give the Data to the People - NYTimes.com - 1 views

  • Johnson & Johnson announced that it was making all of its clinical trial data available to scientists around the world. It has hired my group, Yale University Open Data Access Project, or YODA, to fully oversee the release of the data. Everything in the company’s clinical research vaults, including unpublished raw data, will be available for independent review.
  • Today, more than half of the clinical trials in the United States, including many sponsored by academic and governmental institutions, are not published within two years of their completion. Often they are never published at all.
  • As a result, evidence-based medicine is, at best, based on only some of the evidence.
  • ...3 more annotations...
  • Even when studies are published, the actual data are usually not made available. End users of research — patients, doctors and policy makers — are implicitly told by a single group of researchers to “take our word for it.” They are often forced to accept the report without the prospect of other independent scientists’ reproducing the findings — a violation of a central tenet of the scientific method.
  • Companies worry that their competitors will benefit, that lawyers will take advantage, that incompetent scientists will misconstrue the data and come to mistaken conclusions.
  • We require those who want the data to submit a proposal and identify their research team, funding and any conflicts of interest. They have to complete a short course on responsible conduct and sign an agreement that restricts them to their proposed research question. Most important, they must agree to share whatever they find. And we exclude applicants who seek data for commercial or legal purposes. Our intent is not to be tough gatekeepers, but to ensure that the data are used in a transparent way and contribute to overall scientific knowledge.
Javier E

Guess Who Doesn't Fit In at Work - NYTimes.com - 0 views

  • ACROSS cultures and industries, managers strongly prize “cultural fit” — the idea that the best employees are like-minded.
  • One recent survey found that more than 80 percent of employers worldwide named cultural fit as a top hiring priority.
  • When done carefully, selecting new workers this way can make organizations more productive and profitable.
  • ...18 more annotations...
  • In the process, fit has become a catchall used to justify hiring people who are similar to decision makers and rejecting people who are not.
  • The concept of fit first gained traction in the 1980s. The original idea was that if companies hired individuals whose personalities and values — and not just their skills — meshed with an organization’s strategy, workers would feel more attached to their jobs, work harder and stay longer.
  • in many organizations, fit has gone rogue. I saw this firsthand while researching the hiring practices of the country’s top investment banks, management consultancies and law firms. I interviewed 120 decision makers and spent nine months observing
  • While résumés (and connections) influenced which applicants made it into the interview room, interviewers’ perceptions of fit strongly shaped who walked out with job offers.
  • Crucially, though, for these gatekeepers, fit was not about a match with organizational values. It was about personal fit. In these time- and team-intensive jobs, professionals at all levels of seniority reported wanting to hire people with whom they enjoyed hanging out and could foresee developing close relationships with
  • To judge fit, interviewers commonly relied on chemistry.
  • Many used the “airport test.” As a managing director at an investment bank put it, “Would I want to be stuck in an airport in Minneapolis in a snowstorm with them?”
  • interviewers were primarily interested in new hires whose hobbies, hometowns and biographies matched their own. Bonding over rowing college crew, getting certified in scuba, sipping single-malt Scotches in the Highlands or dining at Michelin-starred restaurants was evidence of fit; sharing a love of teamwork or a passion for pleasing clients was not
  • it has become a common feature of American corporate culture. Employers routinely ask job applicants about their hobbies and what they like to do for fun, while a complementary self-help industry informs white-collar job seekers that chemistry, not qualifications, will win them an offer.
  • Selection based on personal fit can keep demographic and cultural diversity low
  • In the elite firms I studied, the types of shared experiences associated with fit typically required large investments of time and money.
  • Class-biased definitions of fit are one reason investment banks, management consulting firms and law firms are dominated by people from the highest socioeconomic backgrounds
  • Also, whether the industry is finance, high-tech or fashion, a good fit in most American corporations still tends to be stereotypically masculine.
  • Perhaps most important, it is easy to mistake rapport for skill. Just as they erroneously believe that they can accurately tell when someone is lying, people tend to be overly confident in their ability to spot talent. Unstructured interviews, which are the most popular hiring tools for American managers and the primary way they judge fit, are notoriously poor predictors of job performance.
  • Organizations that use cultural fit for competitive advantage tend to favor concrete tools like surveys and structured interviews that systematically test behaviors associated with increased performance and employee retention.
  • For managers who want to use cultural fit in a more productive way, I have several suggestions.
  • First, communicate a clear and consistent idea of what the organization’s culture is (and is not) to potential employees. Second, make sure the definition of cultural fit is closely aligned with business goals. Ideally, fit should be based on data-driven analysis of what types of values, traits and behaviors actually predict on-the-job success. Third, create formal procedures like checklists for measuring fit, so that assessment is not left up to the eyes (and extracurriculars) of the beholder.
  • But cultural fit has become a new form of discrimination that keeps demographic and cultural diversity down
Javier E

Is Amazon Creating a Cultural Monopoly? - The New Yorker - 0 views

  • “We are not experts in antitrust law, and this letter is not a legal brief. But we are authors with a deep, collective experience in this field, and we agree with the authorities in economics and law who have asserted that Amazon’s dominant position makes it a monopoly as a seller of books and a monopsony as a buyer of books.” (A monopoly is a company that has extraordinary control over supply as a seller of goods to consumers; a monopsony has extraordinary control over suppliers as a buyer of their goods.)
  • a highly unorthodox argument: that, even though Amazon’s activities tend to reduce book prices, which is considered good for consumers, they ultimately hurt consumers
  • U.S. courts evaluate antitrust issues very differently, nowadays, than they did a hundred years ago, just after antitrust laws were established to keep big corporations from abusing their power. Back then, judges tended to be largely concerned with protecting suppliers from being squeezed by retailers, which meant that, if a corporation exercised monopoly power to push prices down, hurting suppliers, the company could easily lose an antitrust case. But by the nineteen-eighties, the judiciary’s focus had shifted to protecting consumers, leading courts to become more prone to ruling in favor of the corporation, on the grounds that lower prices are good for consumers.
  • ...7 more annotations...
  • specific argument—that Amazon’s actions are bad for consumers because they make our world less intellectually active and diverse—is unorthodox in its resort to cultural and artistic grounds. But it can be read as the inverse of a case like Leegin v. PSKS: that lower prices for worse products could be bad for consumers—and perhaps constitute an antitrust violation.
  • if higher prices corresponded with better products, that could be good for consumers—and not necessarily an antitrust violation.
  • Their argument is this: Amazon has used its market power both to influence which books get attention (by featuring them more prominently on its Web site, a practice I’ve also written about) and, in some cases, to drive prices lower. These practices, the authors argue, squeeze publishers, which makes them more risk-averse in deciding which books to publish. As a result, they claim, publishers have been “dropping some midlist authors and not publishing certain riskier books, effectively silencing many voices.” And this is bad not only for the non-famous writers who go unpublished, but for their would-be readers, who are denied the ability to hear those voices.
  • While it may be attractive, on a philosophical level, to argue that Amazon is bad for us because it makes our culture poorer, measuring that effect would be difficult, if not impossible. How would one go about valuing an unpublished masterpiece by an unknown author? This is further complicated by the fact that Amazon makes it easy for authors to self-publish and have their work be seen, without having to go through such traditional gatekeepers as agents and publishers; Amazon might argue that this allows for more free flow of information and ideas
  • Furthermore, U.S. law is concerned with diversity in media, Crane said, but that tends to be regulated through the Federal Communications Commission, not the Justice Department.
  • it’s quite possible the Justice Department will read the Authors United letter and dismiss it as uninformed. But even if that happens, Preston said, it will have been worthwhile for the writers to have made their case.
  • Authors United’s larger mission, he told me, was this: “We hope to show the public that getting products faster and cheaper isn’t necessarily the greatest good. It comes at a human cost.”
Javier E

When the Internet Thinks It Knows You - NYTimes.com - 1 views

  • There is a new group of gatekeepers in town, and this time, they’re not people, they’re code.
  • when personalization affects not just what you buy but how you think, different issues arise. Democracy depends on the citizen’s ability to engage with multiple viewpoints; the Internet limits such engagement when it offers up only information that reflects your already established point of view. While it’s sometimes convenient to see only what you want to see, it’s critical at other times that you see things that you don’t.
  • increasingly, and nearly invisibly, our searches for information are being personalized too.
  • ...3 more annotations...
  • Today’s Internet giants — Google, Facebook, Yahoo and Microsoft — see the remarkable rise of available information as an opportunity. If they can provide services that sift through the data and supply us with the most personally relevant and appealing results, they’ll get the most users and the most ad views. As a result, they’re racing to offer personalized filters that show us the Internet that they think we want to see. These filters, in effect, control and limit the information that reaches our screens.
  • At Facebook, “relevance” is virtually the sole criterion that determines what users see. Focusing on the most personally relevant news — the squirrel — is a great business strategy. But it leaves us staring at our front yard instead of reading about suffering, genocide and revolution.
  • It is in our collective interest to ensure that the Internet lives up to its potential as a revolutionary connective medium. This won’t happen if we’re all sealed off in our own personalized online worlds.
Javier E

'Filter Bubble': Pariser on Web Personalization, Privacy - TIME - 0 views

  • the World Wide Web came along and blew the gatekeepers away. Suddenly anyone with a computer and an Internet connection could take part in the conversation. Countless viewpoints bloomed. There was no longer a mainstream; instead, there was an ocean of information, one in which Web users were free to swim.
  • Where once Google delivered search results based on an algorithm that was identical for everyone, now what we see when we enter a term in the big box depends on who we are, where we are and what we are. Facebook has long since done the same thing for its all-important News Feed: you'll see different status updates and stories float to the top based on the data Mark Zuckerberg and company have on you. The universal Web is a thing of the past. Instead, as Pariser writes, we've been left "isolated in a web of one" — and, given that we increasingly view the world through the lens of the Internet, that change has frightening consequences for the media, community and even democracy.
  • Google has begun personalizing search results — something it does even if you're not signed into your Google account. (A Google engineer told Pariser that the company uses 57 different signals to shape individual search results, including what kind of browser you're using and where you are.)
  • ...1 more annotation...
  • Yahoo! News — the biggest news site on the Web — is personalized, and even mainstream sites like those of the New York Times and the Washington Post are giving more space to personalized recommendations. As Google executive chairman Eric Schmidt has said, "It will be very hard for people to watch or consume something that is not tailored for them."
Javier E

College for Grown-Ups - NYTimes.com - 0 views

  • If we were starting from zero, we probably wouldn’t design colleges as age-segregated playgrounds in which teenagers and very young adults are given free rein to spend their time more or less as they choose. Yet this is the reality.
  • Rethinking the expectation that applicants to selective colleges be fresh out of high school would go far in reducing risk for young people while better protecting everyone’s college investment. Some of this rethinking is already underway. Temporarily delaying college for a year or two after high school is now becoming respectable among the admissions gatekeepers at top schools. Massive open online courses (MOOCs) and other forms of online learning make it possible to experience fragments of an elite education at little or no cost.
  • people are tinkering further with conventional campus models. The Minerva Project, a San Francisco start-up with offices two blocks from Twitter, offers classic seminar-style college courses via a sophisticated interactive online learning platform and accompanies them with residencies in cities all over the world. Nearby in the SoMa district, Dev Bootcamp, a 19-week immersive curriculum that trains people of all ages for jobs in the tech industry, is a popular alternative. Some successfully employed graduates brag of bypassing college altogether.
  • ...2 more annotations...
  • At Stanford, where I teach, an idea still in the concept phase developed by a student-led team in the university’s Hasso Plattner Institute of Design calls for the replacement of four consecutive college years in young adulthood with multiple residencies distributed over a lifetime. What the designers call Open Loop University would grant students admitted to Stanford multiple years to spend on campus, along with advisers to help them time those years strategically in light of their personal development and career ambitions. Today’s arbitrarily segregated world of teenagers and young adults would become an ever-replenished intergenerational community of purposeful learners.
  • the status quo is not sustainable. Unrelenting demand for better-educated workers, rapidly developing technological capacity to support learning digitally and the soaring costs of conventional campus life are driving us toward substantial change.
charlottedonoho

Who's to blame when fake science gets published? - 1 views

  • The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn't going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.
  • When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.
  • But in May, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating "irregularities" - statistical quirks and troubling patterns - that strongly implied that the whole LaCour/Green study was based upon made-up data.
  • ...6 more annotations...
  • The data hit the fan, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he'd really done was help LaCour write up the findings.
  • Science magazine didn't shoulder any blame, either. In a statement, editor in chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: "No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud."
  • This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his adviser's eyes - or vice versa. And if close collaborators aren't going to catch the problem, it's no surprise that outside reviewers dragooned into critiquing the research for a journal won't catch it either. A modern science article rests on a foundation of trust.
  • If the process can't catch such obvious fraud - a hoax the perpetrators probably thought wouldn't work - it's no wonder that so many scientists feel emboldened to sneak a plagiarised passage or two past the gatekeepers.
  • Major peer-review journals tend to accept big, surprising, headline-grabbing results when those are precisely the ones that are most likely to be wrong.
  • Despite the artful passing of the buck by LaCour's senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists - even when they don't take an active part in deception - must recognise that they are ultimately responsible for the culture producing the steady drip-drip-drip of falsification, exaggeration and outright fabrication eroding the discipline they serve.
Javier E

Jonathan Franzen Is Fine With All of It - The New York Times - 0 views

  • If you’re in a state of perpetual fear of losing market share for you as a person, it’s just the wrong mind-set to move through the world with.” Meaning that if your goal is to get liked and retweeted, then you are perhaps molding yourself into the kind of person you believe will get those things, whether or not that person resembles the actual you. The writer’s job is to say things that are uncomfortable and hard to reduce. Why would a writer mold himself into a product?
  • And why couldn’t people hear him about the social effects this would have? “The internet is all about destroying the elite, destroying the gatekeepers,” he said. “The people know best. You take that to its conclusion, and you get Donald Trump. What do those Washington insiders know? What does the elite know?
  • So he decided to withdraw from it all. After publicity for “The Corrections” ended, he decided he would no longer read about himself — not reviews, not think pieces, not stories, and then, as they came, not status updates and not tweets. He didn’t want to hear reaction to his work. He didn’t want to see the myriad ways he was being misunderstood. He didn’t want to know what the hashtags were.
  • ...7 more annotations...
  • I stopped reading reviews because I noticed all I remember is the negatives. Whatever fleeting pleasure you have in someone applying a laudatory adjective to your book is totally washed away by the unpleasantness of remembering the negative things for the rest of your life verbatim.
  • Franzen thinks that there’s no way for a writer to do good work — to write something that can be called “consuming and extraordinarily moving” — without putting a fence around yourself so that you can control the input you encounter. So that you could have a thought that isn’t subject to pushback all the time from anyone who has ever met you or heard of you or expressed interest in hearing from you. Without allowing yourself to think for a minute.
  • It’s not just writers. It’s everyone. The writer is just an extreme case of something everyone struggles with. “On the one hand, to function well, you have to believe in yourself and your abilities and summon enormous confidence from somewhere. On the other hand, to write well, or just to be a good person, you need to be able to doubt yourself — to entertain the possibility that you’re wrong about everything, that you don’t know everything, and to have sympathy with people whose lives and beliefs and perspectives are very different from yours.”
  • “This balancing act” — the confidence that you know everything plus the ability to believe that you don’t — “only works, or works best, if you reserve a private space for it.”
  • Can you write clearly about something that you don’t yourself swim in? Don’t you have to endure it and hate it most of the time like the rest of us?
  • his answer was no. No. No, you absolutely don’t. You can miss a meme, and nothing really changes. You can be called fragile, and you will live. “I’m pretty much the opposite of fragile. I don’t need internet engagement to make me vulnerable. Real writing makes me — makes anyone doing it — vulnerable.”
  • Has anyone considered that the interaction is the fragility? Has anyone considered that letting other people define how you fill your day and what they fill your head with — a passive, postmodern stream of other people’s thoughts — is the fragility?
Javier E

In modern mating, sex isn't the only thing that's cheap - The Washington Post - 1 views

  • Regnerus relies on the concept of sexual economics, in which mating is seen as a marketplace. In this view, women are gatekeepers to a limited, highly desired product: sex. In exchange for access to this product, men proffer commitment, fidelity and resources.
  • Regnerus believes that the sharp drop in the value of sex has shifted the market, even its more conservative parts, leading to a massive overall slowdown in the creation of committed relationships like marriage, in large part because men see less of a need to make themselves into appealing long-term partners.
  • among younger women, especially those who want that sort of traditional relationship, there increasingly seems to be a vague dissatisfaction with the state of things. Why, when women have gained so much power, are we so often at impasse in our romantic relationships? Why do men our age seem so unmotivated to grow up and so ambivalent about committing? As uncomfortable as it may be to contemplate, the shifts this book describes may provide an inkling of an explanation.
  • ...2 more annotations...
  • When it comes to commonly held modern ideals — of gender egalitarianism, individualism, the assumption that men might seek to improve themselves even without outside prodding — his response is skepticism bordering on exasperation. “In the domain of sex and relationships, men will act as nobly as women collectively demand,” he writes. “This is an aggravating statement for women to read, no doubt. They do not want to be responsible for ‘raising’ men. But it is realistic.”
  • Throughout his book, Regnerus prods the reader to be skeptical of utopianism and see the world as it is. It’s a useful, if unpleasant, reminder for an era in which our goals seem both loftier and further out of reach than ever
Javier E

A Voter Revolt Against 'Shareholder Value' - WSJ - 0 views

  • a Feb. 29 quotation from Leslie Moonves, chairman of CBS, that sums up everything wrong with today’s media culture—and with corporate America.
  • Reflecting on the Trump phenomenon at a media and technology conference, Mr. Moonves said that “It may not be good for America, but it’s damn good for CBS.”
  • Mr. Moonves is saying that CBS’s only responsibility is to maximize profits, not only in its entertainment division, but also in its news operation
  • ...21 more annotations...
  • He knows that what his network is doing is against the national interest. He has just enough conscience to be aware that it is “terrible,” but not nearly enough to stop doing it. It might impair shareholder value, after all.
  • Mr. Moonves is suggesting that there is no difference in principle between entertainment and news. Both should be judged by the same standard—ratings. If policy speeches don’t attract large enough audiences, cut to a Trump rally.
  • If the leading purveyors of broadcast journalism make no distinction between news and entertainment, then who can blame viewers for seeing no difference between entertainment and politics?
  • American politicians and parties have used entertainment to draw audiences for the better part of two centuries. But there used to be countervailing forces, including prestigious broadcast news organizations. Not anymore. Once these organizations served as gatekeepers; now they are open-door enablers.
  • They are all in the grip of the same misunderstanding, that their business begins and ends with maximizing shareholder value.
  • They may believe that this is a statutory requirement or a fiduciary duty. If so, they are mistaken
  • It is Milton Friedman’s theory. “There is one and only one social responsibility of business,” he wrote in “Capitalism and Freedom,” “to use its resources and engage in activities designed to increase its profits.”
  • corporate law imposes no enforceable legal duty to maximize either profits or share prices.
  • As a policy argument, Friedman’s thesis flunks key empirical tests
  • And it is not politically sustainable. This is the clear meaning of the 2016 presidential election.
  • during the 1970s, inflation, recession, a stagnant stock market and rising competition from abroad created an opening for Friedman’s theory, which soon dominated corporate boardrooms.
  • In the name of maximizing shareholder value, corporations moved plants and jobs around the world, paid the lowest wages they could get away with, and scheduled work assignments to maintain managerial “flexibility,” whatever the consequences for workers’ families. Meanwhile, their lobbyists engineered a myriad of special interest breaks in the corporate tax code.
  • Now we can see what four decades of pursuing shareholder value at the expense of everything else has yielded
  • Public confidence in corporations is at rock-bottom, and public anger is sky-high
  • The revolt against the corporate economic agenda—free trade, a generous immigration policy, lower corporate taxes and the rest—is sweeping the country.
  • As the Republican rank and file has turned against corporations and New Democrats have given ground to left-wing populists, big business has been left politically homeless.
  • It will take corporate America a long time to climb out of this self-created hole.
  • Its first step should be to back long-overdue proposals for improving workers’ lives and incomes. Paid family leave is an idea whose time has come; so is a catch-up increase in the federal minimum wage; so are stable and predictable schedules for part-time workers.
  • Allowing workers to share in profits and productivity increases would be another good step.
  • Above all, corporate leaders should grasp the distinction between immediate gain and self-interest rightly understood. Pushing for the last increment of profit over the next quarter and the one after that comes at the expense of the strategies that can leave firms best positioned for the future.
  • America needs a new generation of corporate statesmen.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
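A screening pipeline like the one described above reduces to two steps: map an applicant's responses to a numeric score, then map the score to a color band. The sketch below is purely illustrative; every feature name, weight, and cutoff is invented, and Xerox's actual model is proprietary.

```python
# Illustrative applicant screening: weighted features -> raw score -> color band.
# Every feature name, weight, and threshold here is hypothetical.

WEIGHTS = {
    "creativity": 0.4,         # "creative but not overly inquisitive"
    "inquisitiveness": -0.2,   # penalized past a point in this toy model
    "social_networks": 0.2,    # proxy for the 1-to-4 network membership signal
    "scenario_judgment": 0.6,  # situational multiple-choice answers
}

def score_applicant(features):
    """Raw score from a dict of feature values normalized to [0, 1]."""
    return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

def color_band(score, yellow=0.3, green=0.6):
    """Map a raw score to the red / yellow / green hiring band."""
    if score >= green:
        return "green"
    if score >= yellow:
        return "yellow"
    return "red"

applicant = {"creativity": 0.8, "inquisitiveness": 0.3,
             "social_networks": 0.5, "scenario_judgment": 0.9}
print(color_band(score_applicant(applicant)))  # "green" for this profile
```

Note that in the article's account the band only gates the pipeline; a human still interviews every candidate before an offer is made.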
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
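Stripped of its proprietary details, scoring of this kind is an exercise in aggregating many noisy public signals into one number. A minimal sketch, with signal names, weights, and the normalization all invented rather than taken from Gild:

```python
# Toy aggregation of public signals into a single "coder score" in [0, 1].
# Signal names, weights, and the squashing function are all hypothetical.
from math import log1p

def normalize(count, scale=100):
    """Squash an unbounded count (code adoptions, upvotes, ...) into [0, 1)."""
    return log1p(count) / log1p(count + scale)

def coder_score(signals, weights):
    """Weighted average of normalized signals; absent signals count as zero."""
    total = sum(weights.values())
    return sum(w * normalize(signals.get(name, 0))
               for name, w in weights.items()) / total

weights = {"code_adoptions": 3,      # how often others reuse your code
           "answer_upvotes": 2,      # Stack Overflow-style reputation
           "projects_completed": 1}  # paid-project completions

alice = {"code_adoptions": 250, "answer_upvotes": 40, "projects_completed": 12}
bob = {"answer_upvotes": 5}  # no public code at all

print(coder_score(alice, weights), coder_score(bob, weights))
```

The more striking step in the article's account comes after calibration: once scores like these are anchored to known-good programmers, correlated side signals (word choice on social networks, even visits to a particular manga site) can stand in for the missing code.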
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
Javier E

He Wants to Save Classics From Whiteness. Can the Field Survive? - The New York Times - 0 views

  • Padilla laid out an indictment of his field. “If one were intentionally to design a discipline whose institutional organs and gatekeeping protocols were explicitly aimed at disavowing the legitimate status of scholars of color,” he said, “one could not do better than what classics has done.”
  • Padilla believes that classics is so entangled with white supremacy as to be inseparable from it. “Far from being extrinsic to the study of Greco-Roman antiquity,” he has written, “the production of whiteness turns on closer examination to reside in the very marrows of classics.”
  • Rather than kowtowing to criticism, Williams said, “maybe we should start defending our discipline.” She protested that it was imperative to stand up for the classics as the political, literary and philosophical foundation of European and American culture: “It’s Western civilization. It matters because it’s the West.” Hadn’t classics given us the concepts of liberty, equality and democracy?
  • Williams ceded the microphone, and Padilla was able to speak. “Here’s what I have to say about the vision of classics that you outlined,” he said. “I want nothing to do with it. I hope the field dies that you’ve outlined, and that it dies as swiftly as possible.”
  • “I believe in merit. I don’t look at the color of the author.” She pointed a finger in Padilla’s direction. “You may have got your job because you’re Black,” Williams said, “but I would prefer to think you got your job because of merit.”
  • What he did find was a slim blue-and-white textbook titled “How People Lived in Ancient Greece and Rome.” “Western civilization was formed from the union of early Greek wisdom and the highly organized legal minds of early Rome,” the book began. “The Greek belief in a person’s ability to use his powers of reason, coupled with Roman faith in military strength, produced a result that has come to us as a legacy, or gift from the past.” Thirty years later, Padilla can still recite those opening lines.
  • In 2017, he published a paper in the journal Classical Antiquity that compared evidence from antiquity and the Black Atlantic to draw a more coherent picture of the religious life of the Roman enslaved. “It will not do merely to adopt a pose of ‘righteous indignation’ at the distortions and gaps in the archive,” he wrote. “There are tools available for the effective recovery of the religious experiences of the enslaved, provided we work with these tools carefully and honestly.”
  • Padilla sensed that his pursuit of classics had displaced other parts of his identity, just as classics and “Western civilization” had displaced other cultures and forms of knowledge. Recovering them would be essential to dismantling the white-supremacist framework in which both he and classics had become trapped. “I had to actively engage in the decolonization of my mind,” he told me.
  • He also gravitated toward contemporary scholars like José Esteban Muñoz, Lorgia García Peña and Saidiya Hartman, who speak of race not as a physical fact but as a ghostly system of power relations.
  • In response to rising anti-immigrant sentiment in Europe and the United States, Mary Beard, perhaps the most famous classicist alive, wrote in The Wall Street Journal that the Romans “would have been puzzled by our modern problems with migration and asylum,” because the empire was founded on the “principles of incorporation and of the free movement of people.”
  • In November 2015, he wrote an essay for Eidolon, an online classics journal, clarifying that in Rome, as in the United States, paeans to multiculturalism coexisted with hatred of foreigners. Defending a client in court, Cicero argued that “denying foreigners access to our city is patently inhumane,” but ancient authors also recount the expulsions of whole “suspect” populations, including a roundup of Jews in 139 B.C., who were not considered “suitable enough to live alongside Romans.”
  • The job of classicists is not to “point out the howlers,” he said on a 2017 panel. “To simply take the position of the teacher, the qualified classicist who knows things and can point to these mistakes, is not sufficient.”
  • Dismantling structures of power that have been shored up by the classical tradition will require more than fact-checking; it will require writing an entirely new story about antiquity, and about who we are today
  • To find that story, Padilla is advocating reforms that would “explode the canon” and “overhaul the discipline from nuts to bolts,” including doing away with the label “classics” altogether.
  • . “What I want to be thinking about in the next few weeks,” he told them, “is how we can be telling the story of the early Roman Empire not just through a variety of sources but through a variety of persons.” He asked the students to consider the lives behind the identities he had assigned them, and the way those lives had been shaped by the machinery of empire, which, through military conquest, enslavement and trade, creates the conditions for the large-scale movement of human beings.
  • ultimately, he decided that leaving enslaved characters out of the role play was an act of care. “I’m not yet ready to turn to a student and say, ‘You are going to be a slave.’”
  • Privately, even some sympathetic classicists worry that Padilla’s approach will only hasten the field’s decline. “I’ve spoken to undergrad majors who say that they feel ashamed to tell their friends they’re studying classics,”
  • “I very much admire Dan-el’s work, and like him, I deplore the lack of diversity in the classical profession,” Mary Beard told me via email. But “to ‘condemn’ classical culture would be as simplistic as to offer it unconditional admiration.”
  • In a 2019 talk, Beard argued that “although classics may become politicized, it doesn’t actually have a politics,” meaning that, like the Bible, the classical tradition is a language of authority — a vocabulary that can be used for good or ill by would-be emancipators and oppressors alike.
  • Over the centuries, classical civilization has acted as a model for people of many backgrounds, who turned it into a matrix through which they formed and debated ideas about beauty, ethics, power, nature, selfhood, citizenship and, of course, race
  • Anthony Grafton, the great Renaissance scholar, put it this way in his preface to “The Classical Tradition”: “An exhaustive exposition of the ways in which the world has defined itself with regard to Greco-Roman antiquity would be nothing less than a comprehensive history of the world.”
  • Classics as we know it today is a creation of the 18th and 19th centuries. During that period, as European universities emancipated themselves from the control of the church, the study of Greece and Rome gave the Continent its new, secular origin story. Greek and Latin writings emerged as a competitor to the Bible’s moral authority, which lent them a liberatory power
  • Historians stress that such ideas cannot be separated from the discourses of nationalism, colorism and progress that were taking shape during the modern colonial period, as Europeans came into contact with other peoples and their traditions. “The whiter the body is, the more beautiful it is,” Winckelmann wrote.
  • While Renaissance scholars were fascinated by the multiplicity of cultures in the ancient world, Enlightenment thinkers created a hierarchy with Greece and Rome, coded as white, on top, and everything else below.
  • Jefferson, along with most wealthy young men of his time, studied classics at college, where students often spent half their time reading and translating Greek and Roman texts. “Next to Christianity,” writes Caroline Winterer, a historian at Stanford, “the central intellectual project in America before the late 19th century was classicism.
  • Of the 2.5 million people living in America in 1776, perhaps only 3,000 had gone to college, but that number included many of the founders
  • They saw classical civilization as uniquely educative — a “lamp of experience,” in the words of Patrick Henry, that could light the path to a more perfect union. However true it was, subsequent generations would come to believe, as Hannah Arendt wrote in “On Revolution,” that “without the classical example … none of the men of the Revolution on either side of the Atlantic would have possessed the courage for what then turned out to be unprecedented action.”
  • Comparisons between the United States and the Roman Empire became popular as the country emerged as a global power. Even after Latin and Greek were struck from college-entrance exams, the proliferation of courses on “great books” and Western civilization, in which classical texts were read in translation, helped create a coherent national story after the shocks of industrialization and global warfare.
  • even as the classics were pulled apart, laughed at and transformed, they continued to form the raw material with which many artists shaped their visions of modernity.
  • Over the centuries, thinkers as disparate as John Adams and Simone Weil have likened classical antiquity to a mirror. Generations of intellectuals, among them feminist, queer and Black scholars, have seen something of themselves in classical texts, flashes of recognition that held a kind of liberatory promise
  • The language that is used to describe the presence of classical antiquity in the world today — the classical tradition, legacy or heritage — contains within it the idea of a special, quasi-genetic relationship. In his lecture “There Is No Such Thing as Western Civilization,” Kwame Anthony Appiah (this magazine’s Ethicist columnist) mockingly describes the belief in such a kinship as the belief in a “golden nugget” of insight — a precious birthright and shimmering sign of greatness — that white Americans and Europeans imagine has been passed down to them from the ancients.
  • To see classics the way Padilla sees it means breaking the mirror; it means condemning the classical legacy as one of the most harmful stories we’ve told ourselves
  • Padilla is wary of colleagues who cite the radical uses of classics as a way to forestall change; he believes that such examples have been outmatched by the field’s long alliance with the forces of dominance and oppression.
  • Classics and whiteness are the bones and sinew of the same body; they grew strong together, and they may have to die together. Classics deserves to survive only if it can become “a site of contestation” for the communities who have been denigrated by it in the past.
  • if classics fails his test, Padilla and others are ready to give it up. “I would get rid of classics altogether,” Walter Scheidel, another of Padilla’s former advisers at Stanford, told me. “I don’t think it should exist as an academic field.”
  • One way to get rid of classics would be to dissolve its faculties and reassign their members to history, archaeology and language departments.
  • many classicists are advocating softer approaches to reforming the discipline, placing the emphasis on expanding its borders. Schools including Howard and Emory have integrated classics with Ancient Mediterranean studies, turning to look across the sea at Egypt, Anatolia, the Levant and North Africa. The change is a declaration of purpose: to leave behind the hierarchies of the Enlightenment and to move back toward the Renaissance model of the ancient world as a place of diversity and mixture.
  • Ian Morris put it more bluntly. “Classics is a Euro-American foundation myth,” Morris said to me. “Do we really want that sort of thing?”
  • "There's a more interesting story to be told about the history of what we call the West, the history of humanity, without valorizing particular cultures in it," said Josephine Quinn, a professor of ancient history at Oxford. "It seems to me the really crucial mover in history is always the relationship between people, between cultures."
  • “In some moods, I feel that this is just a moment of despair, and people are trying to find significance even if it only comes from self-accusation,” he told me. “I’m not sure that there is a discipline that is exempt from the fact that it is part of the history of this country. How distinctly wicked is classics? I don’t know that it is.”
  • “One of the dubious successes of my generation is that it did break the canon,” Richlin told me. “I don’t think we could believe at the time that we would be putting ourselves out of business, but we did.” She added: “If they blew up the classics departments, that would really be the end.”
  • Padilla, like Douglass, now sees the moment of absorption into the classical, literary tradition as simultaneous with his apprehension of racial difference; he can no longer find pride or comfort in having used it to bring himself out of poverty.
  • “Claiming dignity within this system of structural oppression,” Padilla has said, “requires full buy-in into its logic of valuation.” He refuses to “praise the architects of that trauma as having done right by you at the end.”
  • Last June, as racial-justice protests unfolded across the nation, Padilla turned his attention to arenas beyond classics. He and his co-authors — the astrophysicist Jenny Greene, the literary theorist Andrew Cole and the poet Tracy K. Smith — began writing their open letter to Princeton with 48 proposals for reform. “Anti-Blackness is foundational to America,” the letter began. “Indifference to the effects of racism on this campus has allowed legitimate demands for institutional support and redress in the face of microaggression and outright racist incidents to go long unmet.”
  • Padilla believes that the uproar over free speech is misguided. “I don’t see things like free speech or the exchange of ideas as ends in themselves,” he told me. “I have to be honest about that. I see them as a means to the end of human flourishing.”
  • "There is a certain kind of classicist who will look on what transpired and say, 'Oh, that's not us,'" Padilla said when we spoke recently. "What is of interest to me is why is it so imperative for classicists of a certain stripe to make this discursive move? 'This is not us.'"
  • Joel Christensen, the Brandeis professor, now feels that it is his "moral and ethical and intellectual responsibility" to teach classics in a way that exposes its racist history. "Otherwise we're just participating in propaganda."
  • Christensen, who is 42, was in graduate school before he had his “crisis of faith,” and he understands the fear that many classicists may experience at being asked to rewrite the narrative of their life’s work. But, he warned, “that future is coming, with or without Dan-el.”
  • On Jan. 6, Padilla turned on the television minutes after the windows of the Capitol were broken. In the crowd, he saw a man in a Greek helmet with TRUMP 2020 painted in white. He saw a man in a T-shirt bearing a golden eagle on a fasces — symbols of Roman law and governance — below the logo 6MWE, which stands for "Six Million Wasn't Enough."