TOK Friends: Group items matching "publishing" in title, tags, annotations or url

Javier E

George Packer: Is Amazon Bad for Books? : The New Yorker - 0 views

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent.
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year.
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue. While the company was building the
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of it dreck, while...
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
Javier E

Many Academics Are Eager to Publish in Worthless Journals - The New York Times - 0 views

  • it’s increasingly clear that many academics know exactly what they’re getting into, which explains why these journals have proliferated despite wide criticism. The relationship is less predator and prey, some experts say, than a new and ugly symbiosis.
  • “When hundreds of thousands of publications appear in predatory journals, it stretches credulity to believe all the authors and universities they work for are victims,” Derek Pyne, an economics professor at Thompson Rivers University in British Columbia, wrote in an op-ed published in the Ottawa Citizen, a Canadian newspaper.
  • The number of such journals has exploded to more than 10,000 in recent years, with nearly as many predatory as legitimate ones. “Predatory publishing is becoming an organized industry,” wrote one group of critics in a paper in Nature
  • Many of these journals have names that closely resemble those of established publications, making them easily mistakable. There is the Journal of Economics and Finance, published by Springer, but now also the Journal of Finance and Economics. There is the Journal of Engineering Technology, put out by the American Society for Engineering Education, but now another called the GSTF Journal of Engineering Technology.
  • Predatory journals have few expenses, since they do not seriously review papers that are submitted and they publish only online. They blast emails to academics, inviting them to publish. And the journals often advertise on their websites that they are indexed by Google Scholar. Often that is correct — but Google Scholar does not vet the journals it indexes.
  • The journals are giving rise to a wider ecosystem of pseudoscience. For the academic who wants to add credentials to a resume, for instance, publishers also hold meetings where, for a hefty fee, you can be listed as a presenter — whether you actually attend the meeting or not.
  • Participating in such dubious enterprises carries few risks. Dr. Pyne, who did a study of his colleagues’ publications, reports that faculty members at his school who got promoted last year had at least four papers in questionable journals. All but one of the ten academics who won a School of Business and Economics award had published papers in these journals. One had 10 such articles.
  • Academics get rewarded with promotions when they stuff their resumes with articles like these, Dr. Pyne concluded. There are few or no adverse consequences — in fact, the rewards for publishing in predatory journals were greater than for publishing in legitimate ones.
  • Some say the academic system bears much of the blame for the rise of predatory journals, demanding publications even from teachers at places without real resources for research and where they may have little time apart from teaching. At Queensborough, faculty members typically teach nine courses per year. At four-year colleges, faculty may teach four to six courses a year.
  • Recently, a group of researchers invented a fake academic: Anna O. Szust. The name in Polish means fraudster. Dr. Szust applied to legitimate and predatory journals asking to be an editor. She supplied a résumé in which her publications and degrees were total fabrications, as were the names of the publishers of the books she said she had contributed to. The legitimate journals rejected her application immediately. But 48 out of 360 questionable journals made her an editor. Four made her editor in chief. One journal sent her an email saying, “It’s our pleasure to add your name as our editor in chief for the journal with no responsibilities.”
sissij

Watch How Casually False Claims are Published: New York Times and Nicholas Lemann Edition - 1 views

  •  wrote my favorite sentence about this whole affair, one which I often quoted in my speeches to great audience laughter: “there are only three possible explanations for the Snowden heist: 1) It was a Russian espionage operation; 2) It was a Chinese espionage operation; or 3) It was a joint Sino-Russian operation.”
  • demanding that they only publish those which expose information necessary to inform the public debate: precisely because he did not want to destroy NSA programs he believes are justifiable.
  • As is true of most leaks – from the routine to the spectacular – those publishing decisions rested solely in the hands of the media outlets and their teams of reporters, editors and lawyers.
  • There have of course been some stories where my calculation of what is not public interest differs from that of reporters, but it is for this precise reason that publication decisions were entrusted to journalists and their editors.
  • it is so often the case that the most influential media outlets publish factually false statements using the most authoritative tones.
  • Ironically, the most controversial Snowden stories – the type his critics cite as the ones that should not have been published because they exposed sensitive national security secrets – were often the ones the NYT itself decided to publish, such as its very controversial exposé on how NSA spied on China’s Huawei.
  • Snowden didn’t decide what stayed secret. The press did.
  • journalist-driven process that determined which documents got published
  • But Snowden never said anything like that.
  •  
    The reporting on Snowden shows how bad our news system can be. Although, arguably, this article could be false as well, since we can never know the exact truth except for Snowden himself. Our flaws in logic and perception make us very vulnerable to bad news. There was a time in the media known as "Yellow News," when news publications weren't taking their responsibility as a guide to the general population. We surely need better news, and we must prevent an era in which news is published based on its "hotness" instead of its accuracy. --Sissi (1/12/2017)
Javier E

Google's new media apocalypse: How the search giant wants to accelerate the end of the age of websites - Salon.com - 0 views

  • Google is announcing that it wants to cut out the middleman—that is to say, other websites—and serve you content within its own lovely little walled garden. That sound you just heard was a bunch of media publishers rushing to book an extra appointment with their shrink.
  • Back when search, and not social media, ruled the internet, Google was the sun around which the news industry orbited. Getting to the top of Google’s results was the key that unlocked buckets of page views. Outlet after outlet spent countless hours trying to figure out how to game Google’s prized, secretive algorithm. Whole swaths of the industry were killed instantly if Google tweaked the algorithm.
  • Facebook is now the sun. Facebook is the company keeping everyone up at night. Facebook is the place shaping how stories get chosen, how they get written, how they are packaged and how they show up on its site. And Facebook does all of this with just as much secrecy and just as little accountability as Google did.
  • Facebook just opened up its Instant Articles feature to all publishers. The feature allows external outlets to publish their content directly onto Facebook’s platform, eliminating that pesky journey to their actual website. They can either place their own ads on the content or join a revenue-sharing program with Facebook. Facebook has touted this plan as one which provides a better user experience and has noted the ability for publishers to create ads on the platform as well.
  • The benefit to Facebook is obvious: It gets to keep people inside its house. They don’t have to leave for even a second. The publisher essentially has to accept this reality, sigh about the gradual death of websites and hope that everything works out on the financial side.
  • It’s all part of a much bigger story: that of how the internet, that supposed smasher of gates and leveler of playing fields, has coalesced around a mere handful of mega-giants in the space of just a couple of decades. The gates didn’t really come down. The identities of the gatekeepers just changed. Google, Facebook, Apple, Amazon
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • One of John Ioannidis’s most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • Palmer suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
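
A toy illustration, not from the article: the publication-bias mechanism described in the annotations above—many small studies are run, but mostly the statistically significant ones reach print—is enough by itself to produce a decline effect. The sketch assumes an illustrative true effect of 0.2, small studies of 20 subjects per group, a p < 0.05 cutoff, and one larger replication; none of these figures come from the article.

    # Sketch: publication bias inflates published effect sizes, which then
    # "decline" when a larger replication is run. All numbers are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.2              # modest true group difference (in SD units)
    n_small, n_replication = 20, 200
    published = []

    for _ in range(2000):          # many small, underpowered studies
        treated = rng.normal(true_effect, 1.0, n_small)
        control = rng.normal(0.0, 1.0, n_small)
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:     # only "positive" results get published
            published.append(treated.mean() - control.mean())

    # One later, larger replication of the same effect
    replication = (rng.normal(true_effect, 1.0, n_replication).mean()
                   - rng.normal(0.0, 1.0, n_replication).mean())

    print(f"true effect:           {true_effect:.2f}")
    print(f"mean published effect: {np.mean(published):.2f}")   # inflated
    print(f"one large replication: {replication:.2f}")          # near the truth

The published estimates cluster well above the true effect, so later, better-powered studies look like a mysterious decline even though nothing real has changed.
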
sissij

Believe It Or Not, Most Published Research Findings Are Probably False | Big Think - 0 views

  • but this has come with the side effect of a toxic combination of confirmation bias and Google, enabling us to easily find a study to support whatever it is that we already believe, without bothering to so much as look at research that might challenge our position
  • Indeed, this is a statement oft-used by fans of pseudoscience who take the claim at face value, without applying the principles behind it to their own evidence.
  • at present, most published findings are likely to be incorrect.
  • If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30 percent of the time.
  • The problem is being tackled head on in the field of psychology which was shaken by the Stapel affair in which one Dutch researcher fabricated data in over 50 fraudulent papers before being detected.
  • a problem known as publication bias or the file drawer problem.
  • The smaller the effect size, the less likely the findings are to be true.
  • The greater the number and the lesser the selection of tested relationships, the less likely the findings are to be true.
  • For scientists, the discussion over how to resolve the problem is rapidly heating up with calls for big changes to how researchers register, conduct, and publish research and a growing chorus from hundreds of global scientific organizations demanding that all clinical trials are published.
  •  
    As we learned in TOK, science is full of uncertainties. And in this article, the author suggests that even the publication of scientific papers is full of flaws. But the general population often cites whichever scientific source supports its position. However, scientific findings are full of faults, and the possibility that scientists make a false claim is high. Sometimes it is not errors in the experiments but the fabrication of data that leads to false scientific papers. And there are many patterns behind the publication of false scientific papers.
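
As a rough illustration of the claim above that p = 0.05 "discoveries" are wrong at least 30 per cent of the time, here is the back-of-the-envelope arithmetic usually offered for it. The assumed prior (10 per cent of tested hypotheses are true) and statistical power (0.8) are illustrative figures, not numbers taken from the article.

    # Sketch of the false-discovery arithmetic; the prior and power are assumptions.
    alpha = 0.05        # significance threshold
    power = 0.80        # chance a real effect is detected
    prior_true = 0.10   # assumed fraction of tested hypotheses that are actually true

    false_positives = alpha * (1 - prior_true)   # true nulls declared "discoveries"
    true_positives = power * prior_true          # real effects correctly detected

    fdr = false_positives / (false_positives + true_positives)
    print(f"Fraction of 'discoveries' that are false: {fdr:.0%}")   # ~36%

With less optimistic priors or lower power, the fraction climbs well past 30 per cent.
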
Javier E

Is Amazon Creating a Cultural Monopoly? - The New Yorker - 0 views

  • “We are not experts in antitrust law, and this letter is not a legal brief. But we are authors with a deep, collective experience in this field, and we agree with the authorities in economics and law who have asserted that Amazon’s dominant position makes it a monopoly as a seller of books and a monopsony as a buyer of books.” (A monopoly is a company that has extraordinary control over supply as a seller of goods to consumers; a monopsony has extraordinary control over suppliers as a buyer of their goods.)
  • a highly unorthodox argument: that, even though Amazon’s activities tend to reduce book prices, which is considered good for consumers, they ultimately hurt consumers
  • U.S. courts evaluate antitrust issues very differently, nowadays, than they did a hundred years ago, just after antitrust laws were established to keep big corporations from abusing their power. Back then, judges tended to be largely concerned with protecting suppliers from being squeezed by retailers, which meant that, if a corporation exercised monopoly power to push prices down, hurting suppliers, the company could easily lose an antitrust case. But by the nineteen-eighties, the judiciary’s focus had shifted to protecting consumers, leading courts to become more prone to ruling in favor of the corporation, on the grounds that lower prices are good for consumers.
  • specific argument—that Amazon’s actions are bad for consumers because they make our world less intellectually active and diverse—is unorthodox in its resort to cultural and artistic grounds. But it can be read as the inverse of a case like Leegin v. PSKS: that lower prices for worse products could be bad for consumers—and perhaps constitute an antitrust violation.
  • if higher prices corresponded with better products, that could be good for consumers—and not necessarily an antitrust violation.
  • Their argument is this: Amazon has used its market power both to influence which books get attention (by featuring them more prominently on its Web site, a practice I’ve also written about) and, in some cases, to drive prices lower. These practices, the authors argue, squeeze publishers, which makes them more risk-averse in deciding which books to publish. As a result, they claim, publishers have been “dropping some midlist authors and not publishing certain riskier books, effectively silencing many voices.” And this is bad not only for the non-famous writers who go unpublished, but for their would-be readers, who are denied the ability to hear those voices.
  • While it may be attractive, on a philosophical level, to argue that Amazon is bad for us because it makes our culture poorer, measuring that effect would be difficult, if not impossible. How would one go about valuing an unpublished masterpiece by an unknown author? This is further complicated by the fact that Amazon makes it easy for authors to self-publish and have their work be seen, without having to go through such traditional gatekeepers as agents and publishers; Amazon might argue that this allows for more free flow of information and ideas
  • Furthermore, U.S. law is concerned with diversity in media, Crane said, but that tends to be regulated through the Federal Communications Commission, not the Justice Department.
  • it’s quite possible the Justice Department will read the Authors United letter and dismiss it as uninformed. But even if that happens, Preston said, it will have been worthwhile for the writers to have made their case.
  • Authors United’s larger mission, he told me, was this: “We hope to show the public that getting products faster and cheaper isn’t necessarily the greatest good. It comes at a human cost.”
Javier E

Facebook's Other Critics: Its Viral Stars - The New York Times - 0 views

  • In 2015, the social network began testing a revenue-sharing program with a limited group of creators, and last November, it rolled out Facebook Creator, a special app designed for professional users. Recently, the social network announced that it was testing some additional tools for creators, including a way for users to purchase monthly subscriptions to their favorite creators’ pages.But some of these features are still not widely available, and many influencers say that Facebook’s charm campaign amounts to too little, too late.
  • “It feels like they’ve pulled the biggest bait-and-switch of all time,” said Dan Shaba, a co-founder of The Pun Guys, a Facebook page with 1.2 million followers. “They’ve been promising monetization from the moment we got in.” Mr. Hamilton, he of the hot-pepper thong video, said, “I did 1.8 billion views last year. I made no money from Facebook. Not even a dollar.”
  • While waiting for Facebook to invite them into a revenue-sharing program, some influencers struck deals with viral publishers such as Diply and LittleThings, which paid the creators to share links on their pages. Those publishers paid top influencers around $500 per link, often with multiple links being posted per day, according to a person who reached such deals.
  • In January, Facebook threw a wrench into that media economy by changing its branded content policy to prohibit creators from accepting money for such link-sharing deals, and re-engineering its News Feed algorithms. Traffic to many viral publishers plummeted overnight. LittleThings, a female-focused digital publisher that had amassed more than 12 million Facebook followers, announced that it was shutting down and blamed Facebook’s News Feed changes for cratering its organic traffic.
Javier E

Doubts about Johns Hopkins research have gone unanswered, scientist says - The Washington Post - 0 views

  • Over and over, Daniel Yuan, a medical doctor and statistician, couldn’t understand the results coming out of the lab, a prestigious facility at Johns Hopkins Medical School funded by millions from the National Institutes of Health. He raised questions with the lab’s director. He reran the calculations on his own. He looked askance at the articles arising from the research, which were published in distinguished journals. He told his colleagues: This doesn’t make sense. “At first, it was like, ‘Okay — but I don’t really see it,’ ” Yuan recalled. “Then it started to smell bad.”
  • The passions of scientific debate are probably not much different from those that drive achievement in other fields, so a tragic, even deadly dispute might not be surprising. But science, creeping ahead experiment by experiment, paper by paper, depends also on institutions investigating errors and correcting them if need be, especially if they are made in its most respected journals. If the apparent suicide and Yuan’s detailed complaints provoked second thoughts about the Nature paper, though, there were scant signs of it. The journal initially showed interest in publishing Yuan’s criticism and told him that a correction was “probably” going to be written, according to e-mail records. That was almost six months ago. The paper has not been corrected. The university had already fired Yuan in December 2011, after 10 years at the lab. He had been raising questions about the research for years. He was escorted from his desk by two security guards.
  • Fang said retractions may be rising because it is simply easier to cheat in an era of digital images, which can be easily manipulated. But he said the increase is caused at least in part by the growing competition for publication and for NIH grant money. He noted that in the 1960s, about two out of three NIH grant requests were funded; today, the success rate for applicants for research funding is about one in five. At the same time, getting work published in the most esteemed journals, such as Nature, has become a “fetish” for some scientists, Fang said.
  • Last year, research published in the Proceedings of the National Academy of Sciences found that the percentage of scientific articles retracted because of fraud had increased tenfold since 1975. The same analysis reviewed more than 2,000 retracted biomedical papers and found that 67 percent of the retractions were attributable to misconduct, mainly fraud or suspected fraud.
  • many observers note that universities and journals, while sometimes agreeable to admitting small mistakes, are at times loath to reveal that the essence of published work was simply wrong.“The reader of scientific information is at the mercy of the scientific institution to investigate or not,” said Adam Marcus, who with Ivan Oransky founded the blog Retraction Watch in 2010. In this case, Marcus said, “if Hopkins doesn’t want to move, we may not find out what is happening for two or three years.”
  • The trouble is that a delayed response — or none at all — leaves other scientists to build upon shaky work. Fang said he has talked to researchers who have lost months by relying on results that proved impossible to reproduce. Moreover, as Marcus and Oransky have noted, much of the research is funded by taxpayers. Yet when retractions are done, they are done quietly and “live in obscurity,” meaning taxpayers are unlikely to find out that their money may have been wasted.
Javier E

Deeper Ties to Corporate Cash for Doubtful Climate Researcher - NYTimes.com - 1 views

  • For years, politicians wanting to block legislation on climate change have bolstered their arguments by pointing to the work of a handful of scientists who claim that greenhouse gases pose little risk to humanity.
  • One of the names they invoke most often is Wei-Hock Soon, known as Willie, a scientist at the Harvard-Smithsonian Center for Astrophysics who claims that variations in the sun’s energy can largely explain recent global warming.
  • He has accepted more than $1.2 million in money from the fossil-fuel industry over the last decade while failing to disclose that conflict of interest in most of his scientific papers. At least 11 papers he has published since 2008 omitted such a disclosure, and in at least eight of those cases, he appears to have violated ethical guidelines of the journals that published his work.
  • Historians and sociologists of science say that since the tobacco wars of the 1960s, corporations trying to block legislation that hurts their interests have employed a strategy of creating the appearance of scientific doubt, usually with the help of ostensibly independent researchers who accept industry funding.
  • “The whole doubt-mongering strategy relies on creating the impression of scientific debate,” said Naomi Oreskes, a historian of science at Harvard University and the co-author of “Merchants of Doubt,” a book about such campaigns. “Willie Soon is playing a role in a certain kind of political theater.”
  • Environmentalists have long questioned Dr. Soon’s work, and his acceptance of funding from the fossil-fuel industry was previously known. But the full extent of the links was not; the documents show that corporate contributions were tied to specific papers and were not disclosed, as required by modern standards of publishing.
  • “What it shows is the continuation of a long-term campaign by specific fossil-fuel companies and interests to undermine the scientific consensus on climate change,” said Kert Davies, executive director of the Climate Investigations Center, a group funded by foundations seeking to limit the risks of climate change.
  • As the oil-industry contributions fell, Dr. Soon started receiving hundreds of thousands of dollars through DonorsTrust, an organization based in Alexandria, Va., that accepts money from donors who wish to remain anonymous, then funnels it to various conservative causes.
  • Though often described on conservative news programs as a “Harvard astrophysicist,” Dr. Soon is not an astrophysicist and has never been employed by Harvard. He is a part-time employee of the Smithsonian Institution with a doctoral degree in aerospace engineering. He has received little federal research money over the past decade and is thus responsible for bringing in his own funds, including his salary.
  • Though he has little formal training in climatology, Dr. Soon has for years published papers trying to show that variations in the sun’s energy can explain most recent global warming. His thesis is that human activity has played a relatively small role in causing climate change.
  • Many experts in the field say that Dr. Soon uses out-of-date data, publishes spurious correlations between solar output and climate indicators, and does not take account of the evidence implicating emissions from human behavior in climate change.
  • Gavin A. Schmidt, head of the Goddard Institute for Space Studies in Manhattan, a NASA division that studies climate change, said that the sun had probably accounted for no more than 10 percent of recent global warming and that greenhouse gases produced by human activity explained most of it. “The science that Willie Soon does is almost pointless,” Dr. Schmidt said.
  • Dr. Soon has found a warm welcome among politicians in Washington and state capitals who try to block climate action. United States Senator James M. Inhofe, an Oklahoma Republican who claims that climate change is a global scientific hoax, has repeatedly cited Dr. Soon’s work over the years.
  • Dr. Oreskes, the Harvard science historian, said that academic institutions and scientific journals had been too lax in recent decades in ferreting out dubious research created to serve a corporate agenda.
charlottedonoho

How have changes to publishing affected scientists? | Julie McDougall-Waters | Science | The Guardian - 0 views

  • That was the purpose of a recent oral history event at the Royal Society, involving four senior scientists who began their careers in the 1960s and 1970s. Rather than simply reminiscing, they were asked to recall their publishing experiences in scientific periodicals over the last fifty years. How have things changed since they published their first paper?
  • It became clear that the hierarchy of journals has changed over the last fifty years, and the pressure to publish in those considered to have the highest impact has increased considerably, partly a result of the increased volume of data being produced and the need for readers to filter relevant information from the copious amounts of less pertinent stuff available.
  • What have also changed are the technologies available to write a paper. Frith related the process she went through in writing her first paper: “I wrote my papers by long hand and then typed them myself.” Writing a biological paper before computers is one thing, but Ashmore remembered the problems of producing mathematical formulae in a typed manuscript, explaining that “you wrote the paper and probably took it along to somebody to be typed… And then it came back with spaces where you had to write in the equations.”
  • Another change that interested the panellists was the increased number of collaborative and multiple authored papers now submitted to journals, which led them to think about the ethics of acknowledgement. In Meurig Thomas’s view the author is, simply, “the person that primarily thinks about the experiment, plans it, and writes it. I can sleep more comfortably at night this way. If I claim to be a senior author, I have to write it and I have to concoct what the experiment was, and defend it.” Chaloner suggested that authorship has grown “because of the pressure for people to have publications in their names”, with an “agreement to let you come onto this paper and I’ll get on yours next time”. Frith referred to this as “gaming”.
  • Despite all of the technological developments in the last fifty years, there has been no quick or easy response to questions over refereeing, and the event ended with the feeling that although there is no doubt technology has transformed the way science is communicated, its effect has not invariably simplified the process.
Javier E

Rise in Scientific Journal Retractions Prompts Calls for Reform - NYTimes.com - 1 views

  • before long they reached a troubling conclusion: not only that retractions were rising at an alarming rate, but that retractions were just a manifestation of a much more profound problem — “a symptom of a dysfunctional scientific climate,” as Dr. Fang put it.
  • he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.
  • Members of the committee agreed with their assessment. “I think this is really coming to a head,” said Dr. Roberta B. Ness, dean of the University of Texas School of Public Health. And Dr. David Korn of Harvard Medical School agreed that “there are problems all through the system.”
  • science has changed in some worrying ways in recent decades — especially biomedical research, which consumes a larger and larger share of government science spending.
  • the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent.
  • because journals are now online, bad papers are simply reaching a wider audience, making it more likely that errors will be spotted.
  • The National Institutes of Health accepts a much lower percentage of grant applications today than in earlier decades. At the same time, many universities expect scientists to draw an increasing part of their salaries from grants, and these pressures have influenced how scientists are promoted.
  • Dr. Fang and Dr. Casadevall looked at the rate of retractions in 17 journals from 2001 to 2010 and compared it with the journals’ “impact factor,” a score based on how often their papers are cited by scientists. The higher a journal’s impact factor, the two editors found, the higher its retraction rate.
  • Each year, every laboratory produces a new crop of Ph.D.’s, who must compete for a small number of jobs, and the competition is getting fiercer. In 1973, more than half of biologists had a tenure-track job within six years of getting a Ph.D. By 2006 the figure was down to 15 percent.
  • Yet labs continue to have an incentive to take on lots of graduate students to produce more research. “I refer to it as a pyramid scheme,
  • In such an environment, a high-profile paper can mean the difference between a career in science or leaving the field. “It’s becoming the price of admission,”
  • To survive professionally, scientists feel the need to publish as many papers as possible, and to get them into high-profile journals. And sometimes they cut corners or even commit misconduct to get there.
  • “What people do is they count papers, and they look at the prestige of the journal in which the research is published, and they see how many grant dollars scientists have, and if they don’t have funding, they don’t get promoted,” Dr. Fang said. “It’s not about the quality of the research.”
  • Dr. Ness likens scientists today to small-business owners, rather than people trying to satisfy their curiosity about how the world works. “You’re marketing and selling to other scientists,” she said. “To the degree you can market and sell your products better, you’re creating the revenue stream to fund your enterprise.”
  • Universities want to attract successful scientists, and so they have erected a glut of science buildings, Dr. Stephan said. Some universities have gone into debt, betting that the flow of grant money will eventually pay off the loans.
  • “You can’t afford to fail, to have your hypothesis disproven,” Dr. Fang said. “It’s a small minority of scientists who engage in frank misconduct. It’s a much more insidious thing that you feel compelled to put the best face on everything.”
  • Dr. Stephan points out that a number of countries — including China, South Korea and Turkey — now offer cash rewards to scientists who get papers into high-profile journals.
  • To change the system, Dr. Fang and Dr. Casadevall say, start by giving graduate students a better understanding of science’s ground rules — what Dr. Casadevall calls “the science of how you know what you know.”
  • They would also move away from the winner-take-all system, in which grants are concentrated among a small fraction of scientists. One way to do that may be to put a cap on the grants any one lab can receive.
  • Such a shift would require scientists to surrender some of their most cherished practices — the priority rule, for example, which gives all the credit for a scientific discovery to whoever publishes results first.
  • To ease such cutthroat competition, the two editors would also change the rules for scientific prizes and would have universities take collaboration into account when they decide on promotions.
  • Even scientists who are sympathetic to the idea of fundamental change are skeptical that it will happen any time soon. “I don’t think they have much chance of changing what they’re talking about,” said Dr. Korn, of Harvard.
  • “When our generation goes away, where is the new generation going to be?” he asked. “All the scientists I know are so anxious about their funding that they don’t make inspiring role models. I heard it from my own kids, who went into art and music respectively. They said, ‘You know, we see you, and you don’t look very happy.’ ”
Javier E

The French Do Buy Books. Real Books. - NYTimes.com - 0 views

  • For a few bucks off and the pleasure of shopping from bed, have we handed over a precious natural resource — our nation’s books — to an ambitious billionaire with an engineering degree?
  • France, meanwhile, has just unanimously passed a so-called anti-Amazon law, which says online sellers can’t offer free shipping on discounted books. (“It will be either cheese or dessert, not both at once,” a French commentator explained.)
  • Amazon has a 10 or 12 percent share of new book sales in France. Amazon reportedly handles 70 percent of the country’s online book sales, but just 18 percent of books are sold online.
  • ...8 more annotations...
  • no seller can offer more than 5 percent off the cover price of new books. That means a book costs more or less the same wherever you buy it in France, even online. The Lang law was designed to make sure France continues to have lots of different books, publishers and booksellers.
  • Readers say they trust books far more than any other medium, including newspapers and TV.
  • Six of the world’s 10 biggest book-selling countries — Germany, Japan, France, Italy, Spain and South Korea — have versions of fixed book prices.
  • What underlies France’s book laws isn’t just an economic position — it’s also a worldview. Quite simply, the French treat books as special. Some 70 percent of French people said they read at least one book last year; the average among French readers was 15 books.
  • In Britain, which abandoned its own fixed-price system in the 1990s, there are fewer than 1,000 independent bookstores left. A third closed in the past nine years, as supermarkets and Amazon discounted some books by more than 50 percent.
  • The French government classifies books as an “essential good,” along with electricity, bread and water.
  • None of this is taken for granted. People here have thought for centuries about what makes a book industry vibrant, and are watching developments in Britain and America as cautionary tales. “We don’t sell potatoes,” says Mr. Moni. “There are also ideas in books. That’s what’s dangerous. Because the day that you have a large seller that sells 80 percent of books, he’s the one who will decide what’s published, or what won’t be published.”
  • “When your computer dies, you throw it away,” says Mr. Montagne of the publishers’ association. “But you’ll remember a book 20 years later. You’ve deeply entered into a story that’s not your own. It’s forged who you are. You’ll only see later how much it has affected you. You don’t keep all books, but it’s not a market like others. The contents of a bookcase can define who you are.”
charlottedonoho

Who's to blame when fake science gets published? - 1 views

  • The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn't going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.
  • When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.
  • But in May, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating "irregularities" - statistical quirks and troubling patterns - that strongly implied that the whole LaCour/Green study was based upon made-up data.
  • ...6 more annotations...
  • The data hit the fan, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he'd really done was help LaCour write up the findings.
  • Science magazine didn't shoulder any blame, either. In a statement, editor in chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: "No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud."
  • This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his adviser's eyes - or vice versa. And if close collaborators aren't going to catch the problem, it's no surprise that outside reviewers dragooned into critiquing the research for a journal won't catch it either. A modern science article rests on a foundation of trust.
  • If the process can't catch such obvious fraud - a hoax the perpetrators probably thought wouldn't work - it's no wonder that so many scientists feel emboldened to sneak a plagiarised passage or two past the gatekeepers.
  • Major peer-review journals tend to accept big, surprising, headline-grabbing results, even though those are precisely the ones most likely to be wrong.
  • Despite the artful passing of the buck by LaCour's senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists - even when they don't take an active part in deception - must recognise that they are ultimately responsible for the culture producing the steady drip-drip-drip of falsification, exaggeration and outright fabrication eroding the discipline they serve.
Javier E

Denying Genetics Isn't Shutting Down Racism, It's Fueling It - 0 views

  • For many on the academic and journalistic left, genetics are deemed largely irrelevant when it comes to humans. Our large brains and the societies we have constructed with them, many argue, swamp almost all genetic influences.
  • Humans, in this view, are the only species on Earth largely unaffected by recent (or ancient) evolution, the only species where, for example, the natural division of labor between male and female has no salience at all, the only species, in fact, where natural variations are almost entirely social constructions, subject to reinvention.
  • We are, in this worldview, alone on the planet, born as blank slates, to be written on solely by culture. All differences between men and women are a function of this social effect; as are all differences between the races. If, in the aggregate, any differences in outcome between groups emerge, it is entirely because of oppression, patriarchy, white supremacy, etc. And it is a matter of great urgency that we use whatever power we have to combat these inequalities.
  • ...21 more annotations...
  • Reich simply points out that this utopian fiction is in danger of collapse because it is not true and because genetic research is increasingly proving it untrue.
  • “You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work.” Which means to say that the differences could be (and actually are) substantial.
  • If you don’t establish a reasonable forum for debate on this, Reich argues, if you don’t establish the principle that we do not have to be afraid of any of this, it will be monopolized by truly unreasonable and indeed dangerous racists. And those racists will have the added prestige for their followers of revealing forbidden knowledge.
  • so there are two arguments against the suppression of this truth and the stigmatization of its defenders: that it’s intellectually dishonest and politically counterproductive.
  • Klein seems to back a truly extreme position: that only the environment affects IQ scores, and genes play no part in group differences in human intelligence. To this end, he cites the “Flynn effect,” which does indeed show that IQ levels have increased over the years, and are environmentally malleable up to a point. In other words, culture, politics, and economics do matter.
  • But Klein does not address the crucial point that even with increases in IQ across all races over time, the racial gap is still frustratingly persistent, that, past a certain level, IQ measurements have actually begun to fall in many developed nations, and that Flynn himself acknowledges that the effect does not account for other genetic influences on intelligence.
  • In an email exchange with me, in which I sought clarification, Klein stopped short of denying genetic influences altogether, but argued that, given rising levels of IQ, and given how brutal the history of racism against African-Americans has been, we should nonetheless assume “right now” that genes are irrelevant.
  • My own brilliant conclusion: Group differences in IQ are indeed explicable through both environmental and genetic factors and we don’t yet know quite what the balance is.
  • if we assume genetics play no role, and base our policy prescriptions on something untrue, we are likely to overshoot and over-promise in social policy, and see our rhetoric on race become ever more extreme and divisive.
  • We may even embrace racial discrimination, as in affirmative action, that fuels deeper divides. All of which, it seems to me, is happening — and actively hampering racial progress, as the left defines the most multiracial and multicultural society in human history as simply “white supremacy” unchanged since slavery; and as the right viscerally responds by embracing increasingly racist white identity politics.
  • A more nuanced understanding of race, genetics, and environment would temper this polarization, and allow for more unifying, practical efforts to improve equality of opportunity, while never guaranteeing or expecting equality of outcomes.
  • In some ways, this is just a replay of the broader liberal-conservative argument. Leftists tend to believe that all inequality is created; liberals tend to believe we can constantly improve the world in every generation, forever perfecting our societies.
  • Rightists believe that human nature is utterly unchanging; conservatives tend to see the world as less plastic than liberals, and attempts to remake it wholesale dangerous and often counterproductive.
  • I think the genius of the West lies in having all these strands in our politics competing with one another.
  • Where I do draw the line is the attempt to smear legitimate conservative ideas and serious scientific arguments as the equivalent of peddling white supremacy and bigotry. And Klein actively contributes to that stigmatization and demonization. He calls the science of this “race science” as if it were some kind of illicit and illegitimate activity, rather than simply “science.”
  • He goes on to equate the work of these scientists with the “most ancient justification for bigotry and racial inequality.” He even uses racism to dismiss Murray and Harris: they are, after all, “two white men.”
  • He still refuses to believe that Murray’s views on this are perfectly within the academic mainstream in studies of intelligence, as they were in 1994.
  • Klein cannot seem to hold the following two thoughts in his brain at the same time: that past racism and sexism are foul, disgusting, and have wrought enormous damage and pain and that unavoidable natural differences between races and genders can still exist.
  • it matters that we establish a liberalism that is immune to such genetic revelations, that can strive for equality of opportunity, and can affirm the moral and civic equality of every human being on the planet.
  • Liberalism has never promised equality of outcomes, merely equality of rights. It’s a procedural political philosophy rooted in means, not a substantive one justified by achieving certain ends.
  • liberalism is integral to our future as a free society — and it should not falsely be made contingent on something that can be empirically disproven. It must allow for the truth of genetics to be embraced, while drawing the firmest of lines against any moral or political abuse of it
Javier E

Accelerationism: how a fringe philosophy predicted the future we live in | World news | The Guardian - 1 views

  • Roger Zelazny published his third novel. In many ways, Lord of Light was of its time, shaggy with imported Hindu mythology and cosmic dialogue. Yet there were also glints of something more forward-looking and political.
  • accelerationism has gradually solidified from a fictional device into an actual intellectual movement: a new way of thinking about the contemporary world and its potential.
  • Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.
  • ...31 more annotations...
  • Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled.
  • Accelerationism, therefore, goes against conservatism, traditional socialism, social democracy, environmentalism, protectionism, populism, nationalism, localism and all the other ideologies that have sought to moderate or reverse the already hugely disruptive, seemingly runaway pace of change in the modern world
  • Robin Mackay and Armen Avanessian in their introduction to #Accelerate: The Accelerationist Reader, a sometimes baffling, sometimes exhilarating book, published in 2014, which remains the only proper guide to the movement in existence.
  • “We all live in an operating system set up by the accelerating triad of war, capitalism and emergent AI,” says Steve Goodman, a British accelerationist
  • A century ago, the writers and artists of the Italian futurist movement fell in love with the machines of the industrial era and their apparent ability to invigorate society. Many futurists followed this fascination into war-mongering and fascism.
  • One of the central figures of accelerationism is the British philosopher Nick Land, who taught at Warwick University in the 1990s
  • Land has published prolifically on the internet, not always under his own name, about the supposed obsolescence of western democracy; he has also written approvingly about “human biodiversity” and “capitalistic human sorting” – the pseudoscientific idea, currently popular on the far right, that different races “naturally” fare differently in the modern world; and about the supposedly inevitable “disintegration of the human species” when artificial intelligence improves sufficiently.
  • In our politically febrile times, the impatient, intemperate, possibly revolutionary ideas of accelerationism feel relevant, or at least intriguing, as never before. Noys says: “Accelerationists always seem to have an answer. If capitalism is going fast, they say it needs to go faster. If capitalism hits a bump in the road, and slows down” – as it has since the 2008 financial crisis – “they say it needs to be kickstarted.”
  • On alt-right blogs, Land in particular has become a name to conjure with. Commenters have excitedly noted the connections between some of his ideas and the thinking of both the libertarian Silicon Valley billionaire Peter Thiel and Trump’s iconoclastic strategist Steve Bannon.
  • “In Silicon Valley,” says Fred Turner, a leading historian of America’s digital industries, “accelerationism is part of a whole movement which is saying, we don’t need [conventional] politics any more, we can get rid of ‘left’ and ‘right’, if we just get technology right. Accelerationism also fits with how electronic devices are marketed – the promise that, finally, they will help us leave the material world, all the mess of the physical, far behind.”
  • In 1972, the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari published Anti-Oedipus. It was a restless, sprawling, appealingly ambiguous book, which suggested that, rather than simply oppose capitalism, the left should acknowledge its ability to liberate as well as oppress people, and should seek to strengthen these anarchic tendencies, “to go still further … in the movement of the market … to ‘accelerate the process’”.
  • By the early 90s Land had distilled his reading, which included Deleuze and Guattari and Lyotard, into a set of ideas and a writing style that, to his students at least, were visionary and thrillingly dangerous. Land wrote in 1992 that capitalism had never been properly unleashed, but instead had always been held back by politics, “the last great sentimental indulgence of mankind”. He dismissed Europe as a sclerotic, increasingly marginal place, “the racial trash-can of Asia”. And he saw civilisation everywhere accelerating towards an apocalypse: “Disorder must increase... Any [human] organisation is ... a mere ... detour in the inexorable death-flow.”
  • With the internet becoming part of everyday life for the first time, and capitalism seemingly triumphant after the collapse of communism in 1989, a belief that the future would be almost entirely shaped by computers and globalisation – the accelerated “movement of the market” that Deleuze and Guattari had called for two decades earlier – spread across British and American academia and politics during the 90s. The Warwick accelerationists were in the vanguard.
  • In the US, confident, rainbow-coloured magazines such as Wired promoted what became known as “the Californian ideology”: the optimistic claim that human potential would be unlocked everywhere by digital technology. In Britain, this optimism influenced New Labour
  • At Warwick, however, the prophecies were darker. “One of our motives,” says Plant, “was precisely to undermine the cheery utopianism of the 90s, much of which seemed very conservative” – an old-fashioned male desire for salvation through gadgets, in her view.
  • The CCRU gang formed reading groups and set up conferences and journals. They squeezed into the narrow CCRU room in the philosophy department and gave each other impromptu seminars.
  • The main result of the CCRU’s frantic, promiscuous research was a conveyor belt of cryptic articles, crammed with invented terms, sometimes speculative to the point of being fiction.
  • The Warwick accelerationists saw themselves as participants, not traditional academic observers
  • K-punk was written by Mark Fisher, formerly of the CCRU. The blog retained some Warwick traits, such as quoting reverently from Deleuze and Guattari, but it gradually shed the CCRU’s aggressive rhetoric and pro-capitalist politics for a more forgiving, more left-leaning take on modernity. Fisher increasingly felt that capitalism was a disappointment to accelerationists, with its cautious, entrenched corporations and endless cycles of essentially the same products. But he was also impatient with the left, which he thought was ignoring new technology
  • Alex Williams, together with Nick Srnicek, co-wrote a Manifesto for an Accelerationist Politics. “Capitalism has begun to constrain the productive forces of technology,” they wrote. “[Our version of] accelerationism is the basic belief that these capacities can and should be let loose … repurposed towards common ends … towards an alternative modernity.”
  • What that “alternative modernity” might be was barely, but seductively, sketched out, with fleeting references to reduced working hours, to technology being used to reduce social conflict rather than exacerbate it, and to humanity moving “beyond the limitations of the earth and our own immediate bodily forms”. On politics and philosophy blogs from Britain to the US and Italy, the notion spread that Srnicek and Williams had founded a new political philosophy: “left accelerationism”.
  • Two years later, in 2015, they expanded the manifesto into a slightly more concrete book, Inventing the Future. It argued for an economy based as far as possible on automation, with the jobs, working hours and wages lost replaced by a universal basic income. The book attracted more attention than a speculative leftwing work had for years, with interest and praise from intellectually curious leftists
  • Even the thinking of the arch-accelerationist Nick Land, who is 55 now, may be slowing down. Since 2013, he has become a guru for the US-based far-right movement neoreaction, or NRx as it often calls itself. Neoreactionaries believe in the replacement of modern nation-states, democracy and government bureaucracies by authoritarian city states, which on neoreaction blogs sound as much like idealised medieval kingdoms as they do modern enclaves such as Singapore.
  • Land argues now that neoreaction, like Trump and Brexit, is something that accelerationists should support, in order to hasten the end of the status quo.
  • In 1970, the American writer Alvin Toffler, an exponent of accelerationism’s more playful intellectual cousin, futurology, published Future Shock, a book about the possibilities and dangers of new technology. Toffler predicted the imminent arrival of artificial intelligence, cryonics, cloning and robots working behind airline check-in desks
  • Land left Britain. He moved to Taiwan “early in the new millennium”, he told me, then to Shanghai “a couple of years later”. He still lives there now.
  • In a 2004 article for the Shanghai Star, an English-language paper, he described the modern Chinese fusion of Marxism and capitalism as “the greatest political engine of social and economic development the world has ever known”
  • Once he lived there, Land told me, he realised that “to a massive degree” China was already an accelerationist society: fixated by the future and changing at speed. Presented with the sweeping projects of the Chinese state, his previous, libertarian contempt for the capabilities of governments fell away
  • Without a dynamic capitalism to feed off, as Deleuze and Guattari had in the early 70s, and the Warwick philosophers had in the 90s, it may be that accelerationism just races up blind alleys. In his 2014 book about the movement, Malign Velocities, Benjamin Noys accuses it of offering “false” solutions to current technological and economic dilemmas. With accelerationism, he writes, a breakthrough to a better future is “always promised and always just out of reach”.
  • “The pace of change accelerates,” concluded a documentary version of the book, with a slightly hammy voiceover by Orson Welles. “We are living through one of the greatest revolutions in history – the birth of a new civilisation.”
  • Shortly afterwards, the 1973 oil crisis struck. World capitalism did not accelerate again for almost a decade. For much of the “new civilisation” Toffler promised, we are still waiting
ilanaprincilus06

US cities are losing 36 million trees a year. Here's why it matters and how you can stop it - 0 views

  • Trees can lower summer daytime temperatures by as much as 10 degrees Fahrenheit, according to a recent study.
  • A study published last year by the US Forest Service found that we lost 36 million trees annually from urban and rural communities over a five-year period. That’s a 1% drop from 2009 to 2014.
  • “cities will become warmer, more polluted and generally more unhealthy for inhabitants,”
  • ...17 more annotations...
  • Trees act as water filters, taking in dirty surface water and absorbing nitrogen and phosphorus into the soil.
  • But the one reason for tree loss that humans can control is sensible development.
  • “Every time we put a road down, we put a building and we cut a tree or add a tree, it not only affects that site, it affects the region.”
  • The study placed a value on tree loss based on trees’ role in air pollution removal and energy conservation. The lost value amounted to $96 million a year.
  • Trees provide shade for homes, office buildings, parks and roadways, cooling surface temperatures.
  • Trees absorb carbon and remove pollutants from the atmosphere.
  • Trees reduce energy costs by $4 billion a year, according to Nowak’s study.
  • there are many reasons our tree canopy is declining, including hurricanes, tornadoes, fires, insects and disease.
  • Trees reduce flooding by absorbing water and reducing runoff into streams.
  • Trees can deflect sound
  • Trees absorb 96% of ultraviolet radiation, Nowak says.
  • Many studies have found connections between exposure to nature and better mental and physical health.
  • And studies have associated living near green areas with lower death rates.
  • Worldwide, forests provide for a huge diversity of animal life.
  • there’s a downside to trees too, such as pollen allergies or large falling branches in storms, “and people don’t like raking leaves.”
  • Urban forests especially need our help to replace fallen trees. Unlike rural areas, it is very difficult for trees to repopulate themselves in a city environment with so much pavement and asphalt.
  • Don’t remove old trees if it’s not necessary: Instead, try taking smaller actions like removing branches.
Javier E

Rebecca F Kuang rejects idea authors should not write about other races | Publishing | The Guardian - 0 views

  • The author of Babel and The Poppy War, Rebecca F Kuang, has said she finds the idea that authors should only write about characters of their own race “deeply frustrating and pretty illogical”.
  • the author, who was born in China but moved to the US when she was four, said that there is a “really weird kind of identity politics going on in American publishing”. She is “sympathetic” to an extent, as it is coming from “decades of frustration of seeing the same racist, uncritical, under-researched, shallow stereotypes”
  • The problem, Kuang thinks, is that this has now “spiralled into this really strict and reductive understanding of race”. As a result, a movement that began as a call for more authentic stories about marginalised communities “gets flipped around and weaponised against the marginalised writers to pigeonhole them into telling only certain kinds of stories”.
  • ...2 more annotations...
  • Kuang was critical of those who believe that “the only people getting book deals right now are BIPOC [Black, Indigenous and people of colour] writers”, including Joyce Carol Oates, who tweeted last year that editors do not want to look at debuts by white male writers. Oates “really just needs to get off Twitter”, Kuang joked, but also said “if you just walk into a bookstore you know that’s not true.”
  • “We also know from industry reports year after year that the number of BIPOC authors being published hasn’t really budged since the 70s”, she added. “In fact, you can historically trace the years in which the number of Black authors being published in the US spiked to the years in which Toni Morrison was an acquiring editor, which is very depressing.”
Javier E

The Age of Social Media Is Ending - The Atlantic - 0 views

  • Slowly and without fanfare, around the end of the aughts, social media took its place. The change was almost invisible, but it had enormous consequences. Instead of facilitating the modest use of existing connections—largely for offline life (to organize a birthday party, say)—social software turned those connections into a latent broadcast channel. All at once, billions of people saw themselves as celebrities, pundits, and tastemakers.
  • A global broadcast network where anyone can say anything to anyone else as often as possible, and where such people have come to think they deserve such a capacity, or even that withholding it amounts to censorship or suppression—that’s just a terrible idea from the outset. And it’s a terrible idea that is entirely and completely bound up with the concept of social media itself: systems erected and used exclusively to deliver an endless stream of content.
  • “social media,” a name so familiar that it has ceased to bear meaning. But two decades ago, that term didn’t exist
  • ...35 more annotations...
  • a “web 2.0” revolution in “user-generated content,” offering easy-to-use, easily adopted tools on websites and then mobile apps. They were built for creating and sharing “content.”
  • As the original name suggested, social networking involved connecting, not publishing. By connecting your personal network of trusted contacts (or “strong ties,” as sociologists call them) to others’ such networks (via “weak ties”), you could surface a larger network of trusted contacts
  • The whole idea of social networks was networking: building or deepening relationships, mostly with people you knew. How and why that deepening happened was largely left to the users to decide.
  • That changed when social networking became social media around 2009, between the introduction of the smartphone and the launch of Instagram. Instead of connection—forging latent ties to people and organizations we would mostly ignore—social media offered platforms through which people could publish content as widely as possible, well beyond their networks of immediate contacts.
  • Social media turned you, me, and everyone into broadcasters (if aspirational ones). The results have been disastrous but also highly pleasurable, not to mention massively profitable—a catastrophic combination.
  • A social network is an idle, inactive system—a Rolodex of contacts, a notebook of sales targets, a yearbook of possible soul mates. But social media is active—hyperactive, really—spewing material across those networks instead of leaving them alone until needed.
  • The authors propose social media as a system in which users participate in “information exchange.” The network, which had previously been used to establish and maintain relationships, becomes reinterpreted as a channel through which to broadcast.
  • The toxicity of social media makes it easy to forget how truly magical this innovation felt when it was new. From 2004 to 2009, you could join Facebook and everyone you’d ever known—including people you’d definitely lost track of—was right there, ready to connect or reconnect. The posts and photos I saw characterized my friends’ changing lives, not the conspiracy theories that their unhinged friends had shared with them
  • Twitter, which launched in 2006, was probably the first true social-media site, even if nobody called it that at the time. Instead of focusing on connecting people, the site amounted to a giant, asynchronous chat room for the world. Twitter was for talking to everyone—which is perhaps one of the reasons journalists have flocked to it
  • on Twitter, anything anybody posted could be seen instantly by anyone else. And furthermore, unlike posts on blogs or images on Flickr or videos on YouTube, tweets were short and low-effort, making it easy to post many of them a week or even a day.
  • When we look back at this moment, social media had already arrived in spirit if not by name. RSS readers offered a feed of blog posts to catch up on, complete with unread counts. MySpace fused music and chatter; YouTube did it with video (“Broadcast Yourself”)
  • soon enough, all social networks became social media first and foremost. When groups, pages, and the News Feed launched, Facebook began encouraging users to share content published by others in order to increase engagement on the service, rather than to provide updates to friends. LinkedIn launched a program to publish content across the platform, too. Twitter, already principally a publishing platform, added a dedicated “retweet” feature, making it far easier to spread content virally across user networks.
  • From being asked to review every product you buy to believing that every tweet or Instagram image warrants likes or comments or follows, social media produced a positively unhinged, sociopathic rendition of human sociality.
  • Other services arrived or evolved in this vein, among them Reddit, Snapchat, and WhatsApp, all far more popular than Twitter. Social networks, once latent routes for possible contact, became superhighways of constant content
  • Although you can connect the app to your contacts and follow specific users, on TikTok, you are more likely to simply plug into a continuous flow of video content that has oozed to the surface via algorithm.
  • In the social-networking era, the connections were essential, driving both content creation and consumption. But the social-media era seeks the thinnest, most soluble connections possible, just enough to allow the content to flow.
  • Facebook and all the rest enjoyed a massive rise in engagement and the associated data-driven advertising profits that the attention-driven content economy created. The same phenomenon also created the influencer economy, in which individual social-media users became valuable as channels for distributing marketing messages or product sponsorships by means of their posts’ real or imagined reach
  • “influencer” became an aspirational role, especially for young people for whom Instagram fame seemed more achievable than traditional celebrity—or perhaps employment of any kind.
  • social-media operators discovered that the more emotionally charged the content, the better it spread across its users’ networks. Polarizing, offensive, or just plain fraudulent information was optimized for distribution. By the time the platforms realized and the public revolted, it was too late to turn off these feedback loops.
  • The ensuing disaster was multipart.
  • Rounding up friends or business contacts into a pen in your online profile for possible future use was never a healthy way to understand social relationships.
  • when social networking evolved into social media, user expectations escalated. Driven by venture capitalists’ expectations and then Wall Street’s demands, the tech companies—Google and Facebook and all the rest—became addicted to massive scale
  • Social media showed that everyone has the potential to reach a massive audience at low cost and high gain—and that potential gave many people the impression that they deserve such an audience.
  • On social media, everyone believes that anyone to whom they have access owes them an audience: a writer who posted a take, a celebrity who announced a project, a pretty girl just trying to live her life, that anon who said something afflictive
  • When network connections become activated for any reason or no reason, then every connection seems worthy of traversing.
  • people just aren’t meant to talk to one another this much. They shouldn’t have that much to say, they shouldn’t expect to receive such a large audience for that expression, and they shouldn’t suppose a right to comment or rejoinder for every thought or notion either.
  • This is also why journalists became so dependent on Twitter: It’s a constant stream of sources, events, and reactions—a reporting automat, not to mention an outbound vector for media tastemakers to make tastes.
  • That’s no surprise, I guess, given that the model was forged in the fires of Big Tech companies such as Facebook, where sociopathy is a design philosophy.
  • If change is possible, carrying it out will be difficult, because we have adapted our lives to conform to social media’s pleasures and torments. It’s seemingly as hard to give up on social media as it was to give up smoking en masse
  • Quitting that habit took decades of regulatory intervention, public-relations campaigning, social shaming, and aesthetic shifts. At a cultural level, we didn’t stop smoking just because the habit was unpleasant or uncool or even because it might kill us. We did so slowly and over time, by forcing social life to suffocate the practice. That process must now begin in earnest for social media.
  • Something may yet survive the fire that would burn it down: social networks, the services’ overlooked, molten core. It was never a terrible idea, at least, to use computers to connect to one another on occasion, for justified reasons, and in moderation
  • The problem came from doing so all the time, as a lifestyle, an aspiration, an obsession. The offer was always too good to be true, but it’s taken us two decades to realize the Faustian nature of the bargain.
  • when I first wrote about downscale, the ambition seemed necessary but impossible. It still feels unlikely—but perhaps newly plausible.
  • To win the soul of social life, we must learn to muzzle it again, across the globe, among billions of people. To speak less, to fewer people and less often–and for them to do the same to you, and everyone else as well
  • We cannot make social media good, because it is fundamentally bad, deep in its very structure. All we can do is hope that it withers away, and play our small part in helping abandon it.
Javier E

Dark social traffic in the mobile app era -- Fusion - 1 views

  • over the last two years, the Internet landscape has been changing. People use their phones differently from their computers, and that has made Facebook more dominant.
  • people spend about as much time in apps as they do on the desktop and mobile webs combined.
  • The takeaway is this: if you’re a media company, you are almost certainly underestimating your Facebook traffic. The only question is how much Facebook traffic you’re not counting.
  • ...11 more annotations...
  • it should be even more clear now: Facebook owns web media distribution.
  • The mobile web has exploded. This is due to the falling cost and rising quality of smartphones. Now, both Apple and Google have huge numbers of great apps, and people love them.
  • a good chunk of what we might have called dark social visits are actually Facebook mobile app visitors in disguise.
  • beginning last October, Facebook made changes in its algorithm that started pushing massive amounts of traffic to media publishers. In some cases, as at The Atlantic, where I last worked, our Facebook traffic went up triple-digit percentages. Facebook simultaneously also pushed users to like pages from media companies, which drove up the fan-counts at all kinds of media sites. If you see a page with a million followers, there is a 99 percent chance that it got a push from Facebook.
  • Chief among the non-gaming apps is Facebook. They’ve done a remarkable job building a mobile app that keeps people using it.
  • when people are going through their news feeds on the Facebook app and they click on a link, it’s as if someone cut and pasted that link into the browser, meaning that the Facebook app and the target website don’t do the normal handshaking that they do on the web. In the desktop scenario, the incoming visitor has a tout that runs ahead to the website and says, “Hey, I’m coming from Facebook.com.” In the mobile app scenario that communication, known as the referrer, does not happen.
  • Facebook—which every media publisher already knows owns them—actually has a much tighter grip on web traffic than anyone had thought. Which would make their big-footing among publishers that much more interesting. Because they certainly know how much traffic they’re sending to all your favorite websites, even if those websites themselves do not.
  • Whenever you go to a website, you take along a little profile called a “user agent.” It says what your operating system is and what kind of browser you use, along with some other information (a small classification sketch using the referrer and the user agent follows this list).
  • A story’s shareability is now largely determined by its shareability on Facebook, with all its attendant quirks and feedback loops. We’re all optimizing for Facebook now,
  • the social networks—by which I mostly mean Facebook—have begun to eat away at the roots of the old ways of sharing on non-commercial platforms.
  • what people like to do with their phones, en masse, is open up the Facebook app and thumb through their news feeds.
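A rough illustration of the mechanics described in the annotations above: below is a minimal sketch, in Python, of how a publisher might separate genuine dark-social visits from Facebook in-app traffic using the referrer and the user agent. This is not the article's own method or any official API; the classify_visit() helper is hypothetical, and the FBAN/FBAV tokens are assumed here because they are commonly seen in Facebook's in-app browser user-agent strings, though that is not guaranteed.

```python
# Minimal sketch: label a single pageview as Facebook web, Facebook in-app,
# or dark social, from its referrer and user-agent headers.
# Assumptions: classify_visit() is a hypothetical helper, and the FBAN/FBAV
# tokens are a commonly observed (not guaranteed) signature of Facebook's
# in-app browser.
from typing import Optional

FACEBOOK_APP_TOKENS = ("FBAN", "FBAV")


def classify_visit(referrer: Optional[str], user_agent: str) -> str:
    """Return a coarse traffic-source label for one pageview."""
    ref = (referrer or "").lower()
    if "facebook.com" in ref:
        # Normal web handshake: the visitor "ran ahead" and announced Facebook.
        return "facebook-web"
    if not ref:
        # No referrer at all: either genuine dark social (email, chat,
        # copy-pasted links) or an app that stripped the referrer.
        if any(token in user_agent for token in FACEBOOK_APP_TOKENS):
            return "facebook-app"  # the "dark social in disguise" case
        return "dark-social"
    return "other-referred"


if __name__ == "__main__":
    samples = [
        ("https://www.facebook.com/", "Mozilla/5.0 (Windows NT 10.0)"),
        (None, "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0) [FBAN/FBIOS;FBAV/20.0]"),
        (None, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10)"),
    ]
    for ref, ua in samples:
        print(classify_visit(ref, ua))
    # Expected output: facebook-web, facebook-app, dark-social
```

The middle case is the article's point: without inspecting the user agent, those Facebook app visits would simply be counted as dark social.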