TOK Friends: Group items tagged "company"

Javier E

George Packer: Is Amazon Bad for Books? : The New Yorker - 0 views

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent.
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year.
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue. While the company was building the
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of it dreck.
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
sissij

Google and Facebook Take Aim at Fake News Sites - The New York Times - 0 views

  • Over the last week, two of the world’s biggest internet companies have faced mounting criticism over how fake news on their sites may have influenced the presidential election’s outcome.
  • Hours later, Facebook, the social network, updated the language in its Facebook Audience Network policy, which already says it will not display ads in sites that show misleading or illegal content, to include fake news sites.
  • Google did not escape the glare, with critics saying the company gave too much prominence to false news stories.
  • Facebook has long spoken of how it helped influence and stoke democratic movements in places like the Middle East, and it tells its advertisers that it can help sway its users with ads.
  • It remains to be seen how effective Google’s new policy on fake news will be in practice. The policy will rely on a combination of automated and human reviews to help determine what is fake. Although satire sites like The Onion are not the target of the policy, it is not clear whether some of them, which often run fake news stories written for humorous effect, will be inadvertently affected by Google’s change.
  •  
    Companies are starting to pay attention to fake news on social media. It reminded me of government involvement in economics. Although the internet should be a place of free speech, there is now a mounting amount of fake news and alternative facts that companies need to regulate and restrict. I think as long as there is human society, we need rules. In free markets, we also need government regulation to maintain a balance. --Sissi (3/6/2017)
anonymous

This Is Your Brain on Junk Food: In 'Hooked,' Michael Moss Explores Addiction - The New... - 0 views

  • This Is Your Brain on Junk Food
  • Yet after writing the book, Mr. Moss was not convinced that processed foods could be addictive.
  • In a legal proceeding two decades ago, Michael Szymanczyk, the chief executive of the tobacco giant Philip Morris, was asked to define addiction.
  • “My definition of addiction is a repetitive behavior that some people find difficult to quit,”
  • Mr. Szymanczyk was speaking in the context of smoking. But a fascinating new book by Michael Moss, an investigative journalist and best-selling author, argues that the tobacco executive’s definition of addiction could apply to our relationship with another group of products that Philip Morris sold and manufactured for decades: highly processed foods.
  • In his new book, “Hooked,” Mr. Moss explores the science behind addiction and builds a case that food companies have painstakingly engineered processed foods to hijack the reward circuitry in our brains, causing us to overeat and helping to fuel a global epidemic of obesity and chronic disease.
  • Mr. Moss suggests that processed foods like cheeseburgers, potato chips and ice cream are not only addictive, but that they can be even more addictive than alcohol, tobacco and drugs.
  • In another cynical move, Mr. Moss writes, food companies beginning in the late 1970s started buying a slew of popular diet companies, allowing them to profit off our attempts to lose the weight we gained from eating their products.
  • Heinz, the processed food giant, bought Weight Watchers in 1978 for $72 million. Unilever, which sells Klondike bars and Ben & Jerry’s ice cream, paid $2.3 billion for SlimFast in 2000. Nestle, which makes chocolate bars and Hot Pockets, purchased Jenny Craig in 2006 for $600 million. And in 2010 the private equity firm that owns Cinnabon and Carvel ice cream purchased Atkins Nutritionals, the company that sells low-carb bars, shakes and snacks. Most of these diet brands were later sold to other parent companies.
  • “The food industry blocked us in the courts from filing lawsuits claiming addiction; they started controlling the science in problematic ways, and they took control of the diet industry,”
  • “I’ve been crawling through the underbelly of the processed food industry for 10 years and I continue to be stunned by the depths of the deviousness of their strategy to not just tap into our basic instincts, but to exploit our attempts to gain control of our habits.”
  • The book explained how companies formulate junk foods to achieve a “bliss point” that makes them irresistible and market those products using tactics borrowed from the tobacco industry.
  • In the 1980s, Philip Morris acquired Kraft and General Foods, making it the largest manufacturer of processed foods in the country, with products like Kool-Aid, Cocoa Pebbles, Capri Sun and Oreo cookies.
  • “I had tried to avoid the word addiction when I was writing ‘Salt Sugar Fat,’” he said. “I thought it was totally ludicrous. How anyone could compare Twinkies to crack cocaine was beyond me.”
  • Witness
  • But as he dug into the science that shows how processed foods affect the brain, he was swayed
  • One crucial element that influences the addictive nature of a substance and whether or not we consume it compulsively is how quickly it excites the brain.
  • The faster it hits our reward circuitry, the stronger its impact.
  • That is why smoking crack cocaine is more powerful than ingesting cocaine through the nose, and smoking cigarettes produces greater feelings of reward than wearing a nicotine patch
  • : Smoking reduces the time it takes for drugs to hit the brain.
  • But no addictive drug can fire up the reward circuitry in our brains as rapidly as our favorite foods, Mr. Moss writes. “The smoke from cigarettes takes 10 seconds to stir the brain, but a touch of sugar on the tongue will do so in a little more than a half second, or six hundred milliseconds, to be precise.”
  • This puts the term “fast food” in a new light. “Measured in milliseconds, and the power to addict, nothing is faster than processed food in rousing the brain,” he added.
  • Mr. Moss explains that even people in the tobacco industry took note of the powerful lure of processed foods.
  • In “Hooked,” Michael Moss explores how no addictive drug can fire up the reward circuitry in our brains as rapidly as our favorite foods.
  • As litigation against tobacco companies gained ground in the 1990s, one of the industry’s defenses was that cigarettes were no more addictive than Twinkies.
  • It may have been on to something.
  • “Smoking was given an 8.5, nearly on par with heroin,” Mr. Moss writes. “But overeating, at 7.3, was not far behind, scoring higher than beer, tranquilizers and sleeping pills.”
  • But processed foods are not tobacco, and many people, including some experts, dismiss the notion that they are addictive. Mr. Moss suggests that this reluctance is in part a result of misconceptions about what addiction entails.
  • For one, a substance does not have to hook everyone for it to be addictive.
  • Studies show that most people who drink or use cocaine do not become dependent
  • Nor does everyone who smokes or uses painkillers become addicted.
  • Mr. Moss said that people who struggle with processed food can try simple strategies to conquer routine cravings, like going for a walk, calling a friend or snacking on healthy alternatives like a handful of nuts. But for some people, more extreme measures may be necessary.
  • “It depends where you are on the spectrum,” he said. “I know people who can’t touch a grain of sugar without losing control. They would drive to the supermarket and by the time they got home their car would be littered with empty wrappers. For them, complete abstention is the solution.”
  •  
    Really interesting!! How food affects your brain:
runlai_jiang

Coca-Cola Plans Its First Alcoholic Drink - WSJ - 0 views

  • Coca-Cola Co.’s Japan unit plans to introduce a fizzy alcoholic drink in the country, in what the company describes as the first alcoholic product it has ever developed.
  • Jorge Garduño, president of Coca-Cola’s Japan unit, said in an article posted on the company’s website that it is “going to experiment” with a canned drink that contains alcohol—a product category known as chu-hai in Japan.
  • Coca-Cola’s Japan unit has long sold many drinks that aren’t available elsewhere, including various teas and coffees and a laxative version of Coke called Coca-Cola Plus that was marketed as a health drink.
    • runlai_jiang
       
      Japan has always been innovative and creative in the food industry
  • A spokeswoman for Coca-Cola Japan confirmed Wednesday that low-alcohol products are being “considered as an experimental approach.” She declined to give details of the proposed product, including when it might go on sale, and declined to make Mr. Garduño available for an interview.
  • Analysts and those in the drinks industry have long speculated that traditional divisions between alcoholic and nonalcoholic beverage companies will fade as more stores and websites sell both types of drinks.
  • The Coca-Cola spokesman declined to comment Wednesday on whether the company is exploring alcohol sales outside of Japan.
  • “While I don’t think this represents a global shift in company strategy, I do think we can expect Coca-Cola and its competitors to continue looking for new opportunities as traditional category lines and beverage occasions blur,
    • runlai_jiang
       
      It might be a new strategy for companies to break with tradition and explore wider areas of business
  • Japan has a highly competitive beverage market, where companies can introduce as many as 100 new drinks a year.
  • According to Suntory, the total market in Japan for canned ready-to-drink alcoholic beverages has grown for 10 consecutive years. It grew 9% in 2017 to the equivalent of 183 million 24-can cases.
peterconnelly

The Supreme Court vs. Social Media - The New York Times - 0 views

  • The Supreme Court handed social media companies a win on Tuesday by blocking, for now, a Texas law that would have banned large apps including Facebook and Twitter from weeding out messages based on the views they expressed.
  • Do sites like Facebook have a First Amendment right to allow some material and not others, or an obligation to distribute almost anything?
  • The First Amendment restricts government censorship, but it doesn’t apply to decisions made by businesses.
  • Conservative politicians have long complained that Facebook, Twitter, YouTube and other social media companies unfairly remove or demote some conservative viewpoints.
  • Associations of internet companies and some constitutional rights groups said that the Texas law violated the First Amendment because it allowed the state to tell private businesses what kinds of speech they could or could not distribute.
  • Texas countered that Facebook, Twitter and the like don’t have such First Amendment protections because they are more like old telegraphs, telephone companies and home internet providers.
  • A federal appeals court recently deemed unconstitutional a Florida law passed last year that similarly tried to restrict social media companies’ discretion over speech.
  • written by Justice Samuel Alito that said: “It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies.”
  • These cases force us to wrestle with a fundamental question about what kind of world we want to live in: Are Facebook, Twitter and YouTube so influential in our world that the government should restrain their decisions, or are they private companies that should have the freedom to set their own rules?
Javier E

Opinion | If You Want to Understand How Dangerous Elon Musk Is, Look Outside America - ... - 0 views

  • Twitter was an intoxicating window into my fascinating new assignment. Long suppressed groups found their voices and social media-driven revolutions began to unfold. Movements against corruption gained steam and brought real change. Outrage over a horrific gang rape in Delhi built a movement to fight an epidemic of sexual violence.
  • “What we didn’t realize — because we took it for granted for so long — is that most people spoke with a great deal of freedom, and completely unconscious freedom,” said Nilanjana Roy, a writer who was part of my initial group of Twitter friends in India. “You could criticize the government, debate certain religious practices. It seems unreal now.”
  • Soon enough, other kinds of underrepresented voices also started to appear on — and then dominate — the platform. As women, Muslims and people from lower castes spoke out, the inevitable backlash came. Supporters of the conservative opposition party, the Bharatiya Janata Party, and their right-wing religious allies felt that they had long been ignored by the mainstream press. Now they had the chance to grab the mic.
  • Viewed from the United States, these skirmishes over the unaccountable power of tech platforms seem like a central battleground of free speech. But the real threat in much of the world is not the policies of social media companies, but of governments.
  • The real question now is if Musk’s commitment to “free speech” extends beyond conservatives in America and to the billions of people in the Global South who rely on the internet for open communication.
  • India’s government had demanded that Twitter block tweets and accounts from a variety of journalists, activists and politicians. The company went to court, arguing that these demands went beyond the law and into censorship. Now Twitter’s potential new owner was casting doubt on whether the company should be defying government demands that muzzle freedom of expression.
  • The winning side will not be decided in Silicon Valley or Beijing, the two poles around which debate over free expression on the internet have largely orbited. It will be the actions of governments in capitals like Abuja, Jakarta, Ankara, Brasília and New Delhi.
  • Across the world, countries are putting in place frameworks that on their face seem designed to combat online abuse and misinformation but are largely used to stifle dissent or enable abuse of the enemies of those in power.
  • other governments are passing laws just to increase their power over speech online and to force companies to be an extension of state surveillance.” For example: requiring companies to house their servers locally rather than abroad, which can make them more vulnerable to government surveillance.
  • while much of the focus has been on countries like China, which overtly restricts access to huge swaths of the internet, the real war over the future of internet freedom is being waged in what she called “swing states,” big, fragile democracies like India.
  • it seems that this is actually what he believes. In April, he tweeted: “By ‘free speech’, I simply mean that which matches the law. I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.”
  • Musk is either exceptionally naïve or willfully ignorant about the relationship between government power and free speech, especially in fragile democracies.
  • The combination of a rigid commitment to following national laws and a hands-off approach to content moderation is combustible and highly dangerous.
  • Independent journalism is increasingly under threat in India. Much of the mainstream press has been neutered by a mix of intimidation and conflicts of interests created by the sprawling conglomerates and powerful families that control much of Indian media
  • Twitter has historically fought against censorship. Whether that will continue under Musk seems very much a question. The Indian government has reasons to expect friendly treatment: Musk’s company Tesla has been trying to enter the Indian car market for some time, but in May it hit an impasse in negotiations with the government over tariffs and other issues
Javier E

Instagram's Algorithm Delivers Toxic Video Mix to Adults Who Follow Children - WSJ - 0 views

  • Instagram’s Reels video service is designed to show users streams of short videos on topics the system decides will interest them, such as sports, fashion or humor. 
  • The Meta Platforms-owned social app does the same thing for users its algorithm decides might have a prurient interest in children, testing by The Wall Street Journal showed.
  • The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
  • Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.
  • The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults
  • The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
  • The Canadian Centre for Child Protection, a child-protection group, separately ran similar tests on its own, with similar results.
  • Meta said the Journal’s tests produced a manufactured experience that doesn’t represent what billions of users see. The company declined to comment on why the algorithms compiled streams of separate videos showing children, sex and advertisements, but a spokesman said that in October it introduced new brand safety tools that give advertisers greater control over where their ads appear, and that Instagram either removes or reduces the prominence of four million videos suspected of violating its standards each month. 
  • The Journal reported in June that algorithms run by Meta, which owns both Facebook and Instagram, connect large communities of users interested in pedophilic content. The Meta spokesman said a task force set up after the Journal’s article has expanded its automated systems for detecting users who behave suspiciously, taking down tens of thousands of such accounts each month. The company also is participating in a new industry coalition to share signs of potential child exploitation.
  • “Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions,” said Samantha Stetson, a Meta vice president who handles relations with the advertising industry. She said the prevalence of inappropriate content on Instagram is low, and that the company invests heavily in reducing it.
  • Even before the 2020 launch of Reels, Meta employees understood that the product posed safety concerns, according to former employees.
  • Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.
  • Meta created Reels to compete with TikTok, the video-sharing platform owned by Beijing-based ByteDance. Both products feed users a nonstop succession of videos posted by others, and make money by inserting ads among them. Both companies’ algorithms show to a user videos the platforms calculate are most likely to keep that user engaged, based on his or her past viewing behavior
  • The Journal reporters set up the Instagram test accounts as adults on newly purchased devices and followed the gymnasts, cheerleaders and other young influencers. The tests showed that following only the young girls triggered Instagram to begin serving videos from accounts promoting adult sex content alongside ads for major consumer brands, such as one for Walmart that ran after a video of a woman exposing her crotch. 
  • When the test accounts then followed some users who followed those same young people’s accounts, they yielded even more disturbing recommendations. The platform served a mix of adult pornography and child-sexualizing material, such as a video of a clothed girl caressing her torso and another of a child pantomiming a sex act.
  • Experts on algorithmic recommendation systems said the Journal’s tests showed that while gymnastics might appear to be an innocuous topic, Meta’s behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
  • Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
  • Preventing the system from pushing noxious content to users interested in it, they said, requires significant changes to the recommendation algorithms that also drive engagement for normal users. Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.
  • The test accounts showed that advertisements were regularly added to the problematic Reels streams. Ads encouraging users to visit Disneyland for the holidays ran next to a video of an adult acting out having sex with her father, and another of a young woman in lingerie with fake blood dripping from her mouth. An ad for Hims ran shortly after a video depicting an apparently anguished woman in a sexual situation along with a link to what was described as “the full video.”
  • Current and former Meta employees said in interviews that the tendency of Instagram algorithms to aggregate child sexualization content from across its platform was known internally to be a problem. Once Instagram pigeonholes a user as interested in any particular subject matter, they said, its recommendation systems are trained to push more related content to them.
  • Part of the problem is that automated enforcement systems have a harder time parsing video content than text or still images. Another difficulty arises from how Reels works: Rather than showing content shared by users’ friends, the way other parts of Instagram and Facebook often do, Reels promotes videos from sources they don’t follow
  • In an analysis conducted shortly before the introduction of Reels, Meta’s safety staff flagged the risk that the product would chain together videos of children and inappropriate content, according to two former staffers. Vaishnavi J, Meta’s former head of youth policy, described the safety review’s recommendation as: “Either we ramp up our content detection capabilities, or we don’t recommend any minor content,” meaning any videos of children.
  • At the time, TikTok was growing rapidly, drawing the attention of Instagram’s young users and the advertisers targeting them. Meta didn’t adopt either of the safety analysis’s recommendations at that time, according to J.
  • Stetson, Meta’s liaison with digital-ad buyers, disputed that Meta had neglected child safety concerns ahead of the product’s launch. “We tested Reels for nearly a year before releasing it widely, with a robust set of safety controls and measures,” she said. 
  • After initially struggling to maximize the revenue potential of its Reels product, Meta has improved how its algorithms recommend content and personalize video streams for users
  • Among the ads that appeared regularly in the Journal’s test accounts were those for “dating” apps and livestreaming platforms featuring adult nudity, massage parlors offering “happy endings” and artificial-intelligence chatbots built for cybersex. Meta’s rules are supposed to prohibit such ads.
  • The Journal informed Meta in August about the results of its testing. In the months since then, tests by both the Journal and the Canadian Centre for Child Protection show that the platform continued to serve up a series of videos featuring young children, adult content and apparent promotions for child sex material hosted elsewhere. 
  • As of mid-November, the center said Instagram is continuing to steadily recommend what the nonprofit described as “adults and children doing sexual posing.”
  • Meta hasn’t offered a timetable for resolving the problem or explained how in the future it would restrict the promotion of inappropriate content featuring children. 
  • The Journal’s test accounts found that the problem even affected Meta-related brands. Ads for the company’s WhatsApp encrypted chat service and Meta’s Ray-Ban Stories glasses appeared next to adult pornography. An ad for Lean In Girls, the young women’s empowerment nonprofit run by former Meta Chief Operating Officer Sheryl Sandberg, ran directly before a promotion for an adult sex-content creator who often appears in schoolgirl attire. Sandberg declined to comment. 
  • Through its own tests, the Canadian Centre for Child Protection concluded that Instagram was regularly serving videos and pictures of clothed children who also appear in the National Center for Missing and Exploited Children’s digital database of images and videos confirmed to be child abuse sexual material. The group said child abusers often use the images of the girls to advertise illegal content for sale in dark-web forums.
  • The nature of the content—sexualizing children without generally showing nudity—reflects the way that social media has changed online child sexual abuse, said Lianna McDonald, executive director for the Canadian center. The group has raised concerns about the ability of Meta’s algorithms to essentially recruit new members of online communities devoted to child sexual abuse, where links to illicit content in more private forums proliferate.
  • “Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities,” McDonald said, calling it disturbing that ads from major companies were subsidizing that process.
Javier E

Got Twitter? What's Your Influence Score - NYTimes.com - 1 views

  • IMAGINE a world in which we are assigned a number that indicates how influential we are. This number would help determine whether you receive a job, a hotel-room upgrade or free samples at the supermarket. If your influence score is low, you don’t get the promotion, the suite or the complimentary cookies.
  • it’s not enough to attract Twitter followers — you must inspire those followers to take action.
  • “Now you are being assigned a number in a very public way, whether you want it or not,”
  • “It’s going to be publicly accessible to the people you date, the people you work for. It’s fast becoming mainstream.”
  • Audi would begin offering promotions to Facebook users based on their Klout score. Last year, Virgin America used the company to offer highly rated influencers in Toronto free round-trip flights to San Francisco or Los Angeles.
  • If you have a Facebook, Twitter or LinkedIn account, you are already being judged — or will be soon. Companies with names like Klout, PeerIndex and Twitter Grader are in the process of scoring millions, eventually billions, of people on their level of influence — or in the lingo, rating “influencers.” Yet the companies are not simply looking at the number of followers or friends you’ve amassed. Rather, they are beginning to measure influence in more nuanced ways, and posting their judgments — in the form of a score — online.
  • focus your digital presence on one or two areas of interest. Don’t be a generalist. Most importantly: be passionate, knowledgeable and trustworthy.
  • As for influence in the offline world — it doesn’t count.
  • Klout “lacks sentiment analysis” — so a user who generates a lot of digital chatter might receive a high score even though what’s being said about the user is negative.
  • we are moving closer to creating “social media caste systems,” where people with high scores get preferential treatment by retailers, prospective employers, even prospective dates.
sissij

Home Inspectors on Their Weirdest Discoveries - The New York Times - 0 views

  • When a home is sold, its many secrets can come out of the closet. Brokers, potential buyers and home inspectors step inside properties that may have been completely private for years.
  • Sometimes, owners hide flaws in the hopes a buyer will miss an expensive problem. Other times, homeowners are caught completely unaware that, say, a family of raccoons has taken up residence in the chimney.
  • The buyer, who was supposed to put down a large deposit that afternoon, was livid. The seller’s broker tried to assure her that the problem could be easily fixed.
  •  
    I found this article very interesting as it talks about the job of home inspectors. Home inspectors are the middlemen between seller and buyer, making sure that all the issues with the home and its pricing are transparent and clear to both sides. It reminded me of the rating companies we talked about in economics. In the film "Inside Job", the rating companies were supposed to give good guidance and create transparency between sellers and buyers. However, some rating companies failed to give honest advice to clients, and this lack of information was one of the reasons for the collapse of the economy. Every market needs a responsible middleman to operate efficiently. --Sissi (3/24/2017)
Javier E

I worked at Facebook. I know how Cambridge Analytica could have happened. - The Washing... - 0 views

  • During my 16 months at Facebook, I called many developers and demanded compliance, but I don’t recall the company conducting a single audit of a developer where the company inspected the developer’s data storage. Lawsuits and outright bans were also very rare. I believe the reason for lax enforcement was simple: Facebook didn’t want to make the public aware of huge weaknesses in its data security.
  • Concerned about the lack of protection for users, in 2012 I created a PowerPoint presentation that outlined the ways that data vulnerabilities on Facebook Platform exposed people to harm, and the various ways the company was trying to protect that data. There were many gaps that left users exposed. I also called out potential bad actors, including data brokers and foreign state actors. I sent the document to senior executives at the company but got little to no response. I had no dedicated engineers assigned to help resolve known issues, and no budget for external vendors.
  • Facebook will argue that things have changed since 2012 and that the company has much better processes in place now. If that were true, Cambridge Analytica would be a small side note, a developer that Facebook shut down and sued out of existence in December 2015 when word first got out that it had violated Facebook’s policies to acquire the data of millions. Instead, it appears Facebook used the same playbook that I saw in 2012.
  • In the wake of this catastrophic violation, Mark Zuckerberg must be forced to testify before Congress and should be held accountable for the negligence of his company. Facebook has systematically failed to enforce its own policies. The only solution is external oversight.
Javier E

George Soros: Facebook and Google a menace to society | Business | The Guardian - 0 views

  • Facebook and Google have become “obstacles to innovation” and are a “menace” to society whose “days are numbered”
  • “Mining and oil companies exploit the physical environment; social media companies exploit the social environment,” said the Hungarian-American businessman, according to a transcript of his speech.
  • “This is particularly nefarious because social media companies influence how people think and behave without them even being aware of it. This has far-reaching adverse consequences on the functioning of democracy, particularly on the integrity of elections.”
  • In addition to skewing democracy, social media companies “deceive their users by manipulating their attention and directing it towards their own commercial purposes” and “deliberately engineer addiction to the services they provide”. The latter, he said, “can be very harmful, particularly for adolescents”
  • There is a possibility that once lost, people who grow up in the digital age will have difficulty in regaining it. This may have far-reaching political consequences.”
  • Soros warned of an “even more alarming prospect” on the horizon if data-rich internet companies such as Facebook and Google paired their corporate surveillance systems with state-sponsored surveillance – a trend that’s already emerging in places such as the Philippines.
  • “This may well result in a web of totalitarian control the likes of which not even Aldous Huxley or George Orwell could have imagined,”
  • “The internet monopolies have neither the will nor the inclination to protect society against the consequences of their actions. That turns them into a menace and it falls to the regulatory authorities to protect society against them,
  • He also echoed the words of world wide web inventor Sir Tim Berners-Lee when he said the tech giants had become “obstacles to innovation” that need to be regulated as public utilities “aimed at preserving competition, innovation and fair and open universal access”.
  • Earlier this week, Salesforce’s chief executive, Marc Benioff, said that Facebook should be regulated like a cigarette company because it’s addictive and harmful.
  • In November, Roger McNamee, who was an early investor in Facebook, described Facebook and Google as threats to public health.
Javier E

Parents' Dilemma: When to Give Children Smartphones - WSJ - 0 views

  • Experience has already shown parents that ceding control over the devices has reshaped their children’s lives, allowing an outside influence on school work, friendships, recreation, sleep, romance, sex and free time.
  • Nearly 75% of teenagers had access to smartphones, concluded a 2015 study by Pew Research Center—unlocking the devices about 95 times a day on average,
  • They spent, on average, close to nine hours a day tethered to screens large
  • The more screen time, the more revenue.
  • The goal of Facebook Inc., Alphabet Inc.’s Google, Snap Inc. and their peers is to create or host captivating experiences that keep users glued to their screens, whether for Instagram, YouTube, Snapchat or Facebook
  • Snapchat users 25 and younger, for example, were spending 40 minutes a day on the app, Chief Executive Evan Spiegel said in August. Alphabet boasted to investors recently that YouTube’s 1.5 billion users were spending an average 60 minutes a day on mobile.
  • Facebook’s stock slid 4.5% to close at $179 Friday after CEO Mark Zuckerberg announced plans Thursday to overhaul the Facebook news feed in a way that could reduce the time users spend.
  • Tech companies are working to instill viewing habits earlier than ever. The number of users of YouTube Kids is soaring. Facebook recently launched Messenger Kids, a messaging app for children as young as 6.
  • Ms. Ho’s 16-year-old son, Brian, is an Eagle Scout and chorister, who at times finds it hard to break away from online videogames, even at 3 a.m. The teen recently told his mother he thinks he is addicted. Ms. Ho’s daughter, Samantha, 14, also is glued to her device, in conversations with friends.
  • “You think you’re buying a piece of technology,” Ms. Shepardson said. “Now it’s like oxygen to her.”
  • Psychologists say social media creates anxiety among children when they are away from their phones—what they call “fear of missing out,” whether on social plans, conversations or damaging gossip teens worry could be about themselves.
  • About half the teens in a survey of 620 families in 2016 said they felt addicted to their smartphones. Nearly 80% said they checked the phones more than hourly and felt the need to respond instantly to messages
  • Children set up Instagram accounts under pseudonyms that friends but not parents recognize. Some teens keep several of these so-called Finsta accounts without their parents knowing.
  • An app called Secret Calculator looks and works like an iPhone calculator but doubles as a private vault to hide files, photos and videos.
  • Mr. Zuckerberg told investors late last year that Facebook planned to boost video offerings, noting that live video generates 10 times as many user interactions. Netflix Inc. chief executive Reed Hastings said in April about the addictiveness of its shows that the company was “competing with sleep on the margins.”
  • Keeping children away from disturbing content, though, is easier than keeping them off their phones.
  • About 16% of the nation’s high-school students were bullied online in 2015, according to the U.S. Centers for Disease Control and Prevention. Children who are cyberbullied are three times more likely to contemplate suicide
  • Smartphones “bring the outside in,” said Ms. Ahn, whose husband works for a major tech company. “We want the family to be the center of gravity.”
Javier E

Don't Be Surprised About Facebook and Teen Girls. That's What Facebook Is. | Talking Po... - 0 views

  • First, set aside all morality. Let’s say we have a 16 year old girl who’s been doing searches about average weights, whether boys care if a girl is overweight and maybe some diets. She’s also spent some time on a site called AmIFat.com. Now I set you this task. You’re on the other side of the Facebook screen and I want you to get her to click on as many things as possible and spend as much time clicking or reading as possible. Are you going to show her movie reviews? Funny cat videos? Homework tips? Of course, not.
  • If you’re really trying to grab her attention you’re going to show her content about really thin girls, how their thinness has gotten them the attention of boys who turn out to really love them, and more diets
  • We both know what you’d do if you were operating within the goals and structure of the experiment.
  • This is what artificial intelligence and machine learning are. Facebook is a series of algorithms and goals aimed at maximizing engagement with Facebook. That’s why it’s worth hundreds of billions of dollars. It has a vast army of computer scientists and programmers whose job it is to make that machine more efficient.
  • the Facebook engine is designed to scope you out, take a psychographic profile of who you are and then use its data compiled from literally billions of humans to serve you content designed to maximize your engagement with Facebook.
  • Put in those terms, you barely have a chance.
  • Of course, Facebook can come in and say, this is damaging so we’re going to add some code that says don’t show this dieting/fat-shaming content but girls 18 and under. But the algorithms will find other vulnerabilities
  • So what to do? The decision of all the companies, if not all individuals, was just to lie. What else are you going to do? Say we’re closing down our multi-billion dollar company because our product shouldn’t exist?
  • why exactly are you creating a separate group of subroutines that yanks Facebook back when it does what it’s supposed to do particularly well? This, indeed, was how the internal dialog at Facebook developed, as described in the article I read. Basically, other executives said: Our business is engagement, why are we suggesting people log off for a while when they get particularly engaged?
  • what it makes me think about more is the conversations at Tobacco companies 40 or 50 years ago. At a certain point you realize: our product is bad. If used as intended it causes lung cancer, heart disease and various other ailments in a high proportion of the people who use the product. And our business model is based on the fact that the product is chemically addictive. Our product is getting people addicted to tobacco so that they no longer really have a choice over whether to buy it. And then a high proportion of them will die because we’ve succeeded.
  • The algorithms can be taught to find and address an infinite number of behaviors. But really you’re asking the researchers and programmers to create an alternative set of instructions where Instagram (or Facebook, same difference) jumps in and does exactly the opposite of its core mission, which is to drive engagement
  • You can add filters and claim you’re not marketing to kids. But really you’re only ramping back the vast social harm marginally at best. That’s the product. It is what it is.
  • there is definitely an analogy inasmuch as what you’re talking about here aren’t some glitches in the Facebook system. These aren’t some weird unintended consequences that can be ironed out of the product. It’s also in most cases not bad actors within Facebook. It’s what the product is. The product is getting attention and engagement against which advertising is sold
  • How good is the machine learning? Well, trial and error with between 3 and 4 billion humans makes you pretty damn good. That’s the product. It is inherently destructive, though of course the bad outcomes aren’t distributed evenly throughout the human population.
  • The business model is to refine this engagement engine, getting more attention and engagement and selling ads against the engagement. Facebook gets that revenue and the digital roadkill created by the product gets absorbed by the society at large
  • Facebook is like a spectacularly profitable nuclear energy company which is so profitable because it doesn’t build any of the big safety domes and dumps all the radioactive waste into the local river.
  • in the various articles describing internal conversations at Facebook, the shrewder executives and researchers seem to get this. For the company if not every individual they seem to be following the tobacco companies’ lead.
  • Ed. Note: TPM Reader AS wrote in to say I was conflating Facebook and Instagram and sometimes referring to one or the other in a confusing way. This is a fair
  • I spoke of them as the same intentionally. In part I’m talking about Facebook’s corporate ownership. Both sites are owned and run by the same parent corporation and as we saw during yesterday’s outage they are deeply hardwired into each other.
  • the main reason I spoke of them in one breath is that they are fundamentally the same. AS points out that the issues with Instagram are distinct because Facebook has a much older demographic and Instagram is a predominantly visual medium. (Indeed, that’s why Facebook corporate is under such pressure to use Instagram to drive teen and young adult engagement.) But they are fundamentally the same: AI and machine learning to drive engagement. Same same. Just different permutations of the same dynamic. (A toy sketch of that engagement-ranking loop follows this list.)
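
The engagement-only objective described in the clips above can be made concrete with a small sketch. This is a hypothetical toy, not Facebook’s or Instagram’s actual ranking code; the feature names, weights, and logistic scoring are assumptions chosen only to illustrate the shape of the objective: score items by predicted engagement and surface the highest scorers, with no term for harm.

```python
# Toy illustration of an engagement-only feed ranker (hypothetical).
# Real ranking systems are vastly larger, but the objective has the same
# shape: score items by predicted engagement and show the highest scorers,
# with no penalty for content that is harmful but engaging.
from dataclasses import dataclass
import math

@dataclass
class Post:
    post_id: str
    # Assumed illustrative features; not real platform signals.
    prior_clicks: float    # how often similar posts were clicked
    prior_dwell: float     # average time spent on similar posts
    outrage_score: float   # proxy for emotionally charged content

def predicted_engagement(post: Post, user_affinity: float) -> float:
    """Logistic score: higher means 'more likely to keep the user engaged'.
    Emotionally charged content raises the score -- the model has no concept
    of harm, only of engagement."""
    z = (1.5 * post.prior_clicks
         + 0.8 * post.prior_dwell
         + 1.2 * post.outrage_score
         + 2.0 * user_affinity)
    return 1.0 / (1.0 + math.exp(-z))

def rank_feed(posts: list[Post], user_affinity: float) -> list[Post]:
    # The "core mission": sort purely by predicted engagement.
    return sorted(posts, key=lambda p: predicted_engagement(p, user_affinity), reverse=True)

if __name__ == "__main__":
    feed = [
        Post("vacation_photo", prior_clicks=0.3, prior_dwell=0.2, outrage_score=0.0),
        Post("diet_content",   prior_clicks=0.5, prior_dwell=0.6, outrage_score=0.7),
    ]
    for p in rank_feed(feed, user_affinity=0.1):
        print(p.post_id, round(predicted_engagement(p, 0.1), 3))
```

Any safety intervention of the kind the clips describe has to be bolted on as a separate override that works against this objective, which is the tension the internal debates reportedly turned on.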
Javier E

'The Power of One,' Facebook whistleblower Frances Haugen's memoir - The Washington Post - 0 views

  • When an internal group proposed the conditions under which Facebook should step in and take down speech from political actors, Zuckerberg discarded its work. He said he’d address the issue himself over a weekend. His “solution”? Facebook would not touch speech by any politician, under any circumstances — a fraught decision under the simplistic surface, as Haugen points out. After all, who gets to count as a politician? The municipal dogcatcher?
  • It was also Zuckerberg, she says, who refused to make a small change that would have made the content in people’s feeds less incendiary — possibly because doing so would have caused a key metric to decline.
  • When the Wall Street Journal’s Jeff Horwitz began to break the stories that Haugen helped him document, the most damning one concerned Facebook’s horrifyingly disingenuous response to a congressional inquiry asking if the company had any research showing that its products were dangerous to teens. Facebook said it wasn’t aware of any consensus indicating how much screen time was too much. What Facebook did have was a pile of research showing that kids were being harmed by its products. Allow a clever company a convenient deflection, and you get something awfully close to a lie.
  • ...5 more annotations...
  • after the military in Myanmar used Facebook to stoke the murder of the Rohingya people, Haugen began to worry that this was a playbook that could be infinitely repeated — and only because Facebook chose not to invest in safety measures, such as detecting hate speech in poorer, more vulnerable places. “The scale of the problems was so vast,” she writes. “I believed people were going to die (in certain countries, at least) and for no reason other than higher profit margins.”
  • After a trip to Cambodia, where neighbors killed neighbors in the 1970s because of a “story that labeled people who had lived next to each other for generations as existential threats,” she’d started to wonder about what caused people to turn on one another to such a horrifying degree. “How quickly could a story become the truth people perceived?”
  • she points out the false choice posited by most social media companies: free speech vs. censorship. She argues that lack of transparency is what contributed most to the problems at Facebook. No one on the outside can see inside the algorithms. Even many of those on the inside can’t. “You can’t take a single academic course, anywhere in the world, on the tradeoffs and choices that go into building a social media algorithm or, more importantly, the consequences of those choices,” she writes.
  • In that lack of accountability, social media is a very different ecosystem than the one that helped Ralph Nader take on the auto industry back in the 1960s. Then, there was a network of insurers and plaintiff’s lawyers who also wanted change — and the images of mangled bodies were a lot more visible than what happens inside the mind of a teenage girl. But what if the government forced companies to share their inner workings in the same way it mandates that food companies disclose the nutrition in what they make? What if the government forced social media companies to allow academics and other researchers access to the algorithms they use?
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Musk, SBF, and the Myth of Smug, Castle-Building Nerds - 0 views

  • Experts in content moderation suggested that Musk’s actual policies lacked any coherence and, if implemented, would have all kinds of unintended consequences. That has happened with verification. Almost every decision he makes is an unforced error made with extreme confidence in front of a growing audience of people who already know he has messed up, and is supported by a network of sycophants and blind followers who refuse to see or tell him that he’s messing up. The dynamic is … very Trumpy!
  • As with the former president, it can be hard at times for people to believe or accept that our systems are so broken that a guy who is clearly this inept can also be put in charge of something so important. A common pundit claim before Donald Trump got into the White House was that the gravity of the job and prestige of the office might humble or chasten him.
  • The same seems true for Musk. Even people skeptical of Musk’s behavior pointed to his past companies as predictors of future success. He’s rich. He does smart-people stuff. The rockets land pointy-side up!
  • ...18 more annotations...
  • Time and again, we learned there was never a grand plan or big ideas—just weapons-grade ego, incompetence, thin skin, and prejudice against those who don’t revere him.
  • Despite all the incredible, damning reporting coming out of Twitter and all of Musk’s very public mistakes, many people still refuse to believe—even if they detest him—that he is simply incompetent.
  • What is amazing about the current moment is that, despite how ridiculous it all feels, a fundamental tenet of reality and logic appears to be holding true: If you don’t know what you’re doing or don’t really care, you’ll run the thing you’re in charge of into the ground, and people will notice.
  • And so the moment feels too dumb and too on the nose to be real and yet also very real—kind of like all of reality in 2022.
  • I don’t really know where any of this will lead, but one interesting possibility is that Musk gets increasingly reactionary and trollish in his politics and stewardship of Twitter.
  • Leaving the politics aside, from a basic customer-service standpoint this is generally an ill-advised way for the owner of a company to treat an elected official when that elected official wishes to know why your service has failed them. The reason it is ill-advised is because then the elected official could tweet something like what Senator Markey tweeted on Sunday: “One of your companies is under an FTC consent decree. Auto safety watchdog NHTSA is investigating another for killing people. And you’re spending your time picking fights online. Fix your companies. Or Congress will.”
  • It seems clear that Musk, like any dedicated social-media poster, thrives on validation, so it makes sense that, as he continues to dismantle his own mystique as an innovator, he might look for adoration elsewhere
  • Recent history has shown that, for a specific audience, owning the libs frees a person from having to care about competency or outcome of their actions. Just anger the right people and you’re good, even if you’re terrible at your job. This won’t help Twitter’s financial situation, which seems bleak, but it’s … something!
  • Bankman-Fried, the archetype, appealed to people for all kinds of reasons. His narrative as a philanthropist, and a smart rationalist, and a stone-cold weirdo was something people wanted to buy into because, generally, people love weirdos who don’t conform to systems and then find clever ways to work around them and become wildly successful as a result.
  • Bankman-Fried was a way that a lot of people could access and maybe obliquely understand what was going on in crypto. They may not have understood what FTX did, but they could grasp a nerd trying to leverage a system in order to do good in the world and advance progressive politics. In that sense, Bankman-Fried is easy to root for and exciting to cover. His origin story and narrative become more important than the particulars of what he may or may not be doing.
  • the past few weeks have been yet another reminder that the smug-nerd-genius narrative may sell magazines, and it certainly raises venture funding, but the visionary founder is, first and foremost, a marketing product, not a reality. It’s a myth that perpetuates itself. Once branded a visionary, the founder can use the narrative to raise money and generate a formidable net worth, and then the financial success becomes its own résumé. But none of it is real.
  • Adversarial journalism ideally questions and probes power. If it is trained on technology companies and their founders, it is because they either wield that power or have the potential to do so. It is, perhaps unintuitively, a form of respect for their influence and potential to disrupt. But that’s not what these founders want.
  • even if all tech coverage had been totally flawless, Silicon Valley would have rejected adversarial tech journalism because most of its players do not actually want the responsibility that comes with their potential power. They want only to embody the myth and reap the benefits. They want the narrative, which is focused on origins, ambitions, ethos, and marketing, and less on the externalities and outcomes.
  • Looking at Musk and Bankman-Fried, it would appear that the tech visionaries mostly get their way. For all the complaints of awful, negative coverage and biased reporting, people still want to cheer for and give money to the “smug nerds building castles in the sky.” Though they vary wildly right now in magnitude, their wounds are self-inflicted—and, perhaps, the result of believing their own hype.
  • That’s because, almost always, the smug-nerd-genius narrative is a trap. It’s one that people fall into because they need to believe that somebody out there is so brilliant, they can see the future, or that they have some greater, more holistic understanding of the world (or that such an understanding is possible)
  • It’s not unlike a conspiracy theory in that way. The smug-nerd-genius narrative helps take the complexity of the world and make it more manageable.
  • Putting your faith in a space billionaire or a crypto wunderkind isn’t just sad fanboydom; it is also a way for people to outsource their brain to somebody else who, they believe, can see what they can’t
  • the smug nerd genius is exceedingly rare, and, even when they’re not outed as a fraud or a dilettante, they can be assholes or flawed like anyone else. There aren’t shortcuts for making sense of the world, and anyone who is selling themselves that way or buying into that narrative about them should read to us as a giant red flag.
Javier E

All the (open) world's a stage: how the video game Fallout became a backdrop for live S... - 0 views

  • The Wasteland Theatre Company is not your average band of thespians. Dotted all across the world, they meet behind their keyboards to perform inside Fallout 76, a video game set in a post-nuclear apocalyptic America. The Fallout series is one of gaming’s most popular, famous for encouraging players to role-play survivors within the oddly beautiful ruins of alternate-history Earth
  • “Imagine a wandering theatre troupe in the 17th century going from town to town doing little performances,” says the company’s director, Northern_Harvest, who goes by his gamertag or just ‘North’, and works in communications in real life. “It’s not a new idea; we’re just doing it within the brand new medium of a video game.”
  • The company was formed almost by chance, when North befriended a group of players in the wasteland. As they adventured together, they noticed that the Fallout games are peppered with references to Shakespeare’s works.
  • ...5 more annotations...
  • “The Fallout universe lends itself really well to Shakespeare. It’s very desolate, very grotesque, very tragic, really,” says North. In this world, Shakespeare existed before the bombs fell, so it seemed logical that North and his friends could role-play a company keeping culture alive in the ruins of civilisation – like the troupe of actors in Emily St John Mandel’s post-apocalyptic dystopia, Station Eleven.
  • It takes months to pull a show together. First, North picks the play and adapts it. Hundreds of pages of script are shared with the crew, so set design and rehearsals can commence. “It’s just like a real theatre company, where you start with an idea and a few folks sitting together and figuring out what our season is going to look like,”
  • There are no ticketed seats, and the company makes no money. The majority of audiences stumble across the performances accidentally in the wasteland, and sit to watch the show for free – or tune in on Twitch, where the company broadcasts every performance live
  • In 2022 Fallout 76 claimed to have over 13.5 million players, some of whom North believes “may never have seen a Shakespeare play. Ninety-nine per cent of those who find us sit down and quietly watch the show … It’s really quite moving, performing for people who might not go to the theatre in their own communities or haven’t thought about Shakespeare since high school. We are tickled silly knowing that we are potentially reaching new, untapped audiences and (re)introducing Shakespeare to so many. I hope Shakespeare academics who study comparative drama will take note of our use of this new medium to reach new audiences.”
  • “I think we’re a perfect example of how video games inspire creativity, and celebrate theatre and culture and the arts. I hope that other gamers out there know that there’s so much potential for you to be able to express what you’re passionate about in video games.”
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post - 0 views

  • GPT-4, in contrast, is a state-of-the-art system capable not just of creating words but of describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things
  • ...22 more annotations...
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • it could lead to business models and creative ventures no one can predict.
  • sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • the company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and the AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what it’s saying or when it’s wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique in 2017 known as the transformer that rapidly advanced how AI systems can analyze patterns in human speech and imagery.
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time. (A toy next-word sketch follows this list.)
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, the humans themselves.
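
The “learning which words tend to follow each other” description in the clips above can be illustrated with a deliberately tiny sketch: a count-based next-word model standing in for a transformer trained on trillions of tokens. The corpus, the count table, and the sampling loop below are illustrative assumptions, but the autoregressive pattern — predict a distribution over the next token, sample, append, repeat — is the same shape the article describes.

```python
# Toy next-word model: count which words follow which, then generate text
# one word at a time by sampling from those counts. Real systems replace
# the count table with a transformer trained on internet-scale data, but
# the generation loop is the same.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Pre-training": learn which words tend to follow each other.
follows: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Autoregressive generation: sample the next word from the learned
    distribution, append it, and repeat."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts, k=1)[0])
    return " ".join(words)

print(generate("the"))
```

Swapping the count table for a transformer’s learned conditional distribution, and the toy corpus for trillions of words and images, gives — in outline — the pre-training and one-token-at-a-time generation the article describes.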
sissij

Apple removed the New York Times app in China. Why now? - LA Times - 0 views

  • California’s Internet companies may have once dreamed of liberating China through technology, but these days they seem more willing than ever to play the Communist Party's game
  • "This is a restoration of the Cultural Revolution or another historical retrogression," said another.
  • The Washington Post is one of many Western newspapers that carries a regular paid supplement by China Daily, another Communist Party mouthpiece.
  • The censorship in China has long been a controversial issue that’s widely discussed. I think it’s very natural that those internet companies play by the government’s rules to enter the Chinese market, because most companies are profit-driven rather than idealistic. However, I found the tone of this article very uncomfortable. The author used the word "liberating" as if China were in such bad condition or such great suffering that it needs American freedom to save it. Also, looking back at American history, American heroism plays a big part in what the American government did. They "liberated" Canada, "liberated" Vietnam, "liberated" Pakistan, and now America tries to "liberate" China. However, they never fully understand the pros and cons of censorship, or how China is a totally different country from America. One's medicine can be toxic for another. --Sissi (1/6/2017)