
Javier E

Alex Stamos, Facebook Data Security Chief, To Leave Amid Outcry - The New York Times

  • One central tension at Facebook has been that of the legal and policy teams versus the security team. The security team generally pushed for more disclosure about how nation states had misused the site, but the legal and policy teams have prioritized business imperatives, said the people briefed on the matter.
  • “The people whose job is to protect the user always are fighting an uphill battle against the people whose job is to make money for the company,” said Sandy Parakilas, who worked at Facebook enforcing privacy and other rules until 2012 and now advises a nonprofit organization called the Center for Humane Technology, which is looking at the effect of technology on people.
  • Mr. Stamos said in a statement on Monday, “These are really challenging issues, and I’ve had some disagreements with all of my colleagues, including other executives.” On Twitter, he said he was “still fully engaged with my work at Facebook” and acknowledged that his role had changed, without addressing his future plans.
  • Mr. Stamos joined Facebook from Yahoo in June 2015. He and other Facebook executives, such as Ms. Sandberg, disagreed early on over how proactive the social network should be in policing its own platform, said the people briefed on the matter.
  • Mr. Stamos first put together a group of engineers to scour Facebook for Russian activity in June 2016, the month the Democratic National Committee announced it had been attacked by Russian hackers, the current and former employees said.
  • By November 2016, the team had uncovered evidence that Russian operatives had aggressively pushed DNC leaks and propaganda on Facebook. That same month, Mr. Zuckerberg publicly dismissed the notion that fake news influenced the 2016 election, calling it a “pretty crazy idea.”
  • In the ensuing months, Facebook’s security team found more Russian disinformation and propaganda on its site, according to the current and former employees. By the spring of 2017, deciding how much Russian interference to disclose publicly became a major source of contention within the company.
  • A detailed memorandum Mr. Stamos wrote in early 2017 describing Russian interference was scrubbed for mentions of Russia and winnowed into a blog post last April that outlined, in hypothetical terms, how Facebook could be manipulated by a foreign adversary, they said. Russia was referenced only in a vague footnote. That footnote acknowledged that Facebook’s findings did not contradict a declassified January 2017 report in which the director of national intelligence concluded Russia had sought to undermine the United States election, and Hillary Clinton in particular.
  • Mr. Stamos pushed to disclose as much as possible, while others including Elliot Schrage, Facebook’s vice president of communications and policy, recommended not naming Russia without more ironclad evidence, said the current and former employees.
  • By last September, after Mr. Stamos’s investigation had revealed further Russian interference, Facebook was forced to reverse course. That month, the company disclosed that beginning in June 2015, Russians had paid Facebook $100,000 to run roughly 3,000 divisive ads shown to the American electorate.
  • The public reaction caused some at Facebook to recoil at revealing more, said the current and former employees. Since the 2016 election, Facebook has paid unusual attention to the reputations of Mr. Zuckerberg and Ms. Sandberg, conducting polls to track how they are viewed by the public, said Tavis McGinn, who was recruited to the company last April and headed the executive reputation efforts through September 2017.
  • Mr. McGinn, who now heads Honest Data, which has done polling about Facebook’s reputation in different countries, said Facebook is “caught in a Catch-22.”
  • “Facebook cares so much about its image that the executives don’t want to come out and tell the whole truth when things go wrong,” he said. “But if they don’t, it damages their image.”
  • Mr. McGinn said he left Facebook after becoming disillusioned with the company’s conduct.
  • By December 2017, Mr. Stamos, who reported to Facebook’s general counsel, proposed that he report directly to higher-ups. Facebook executives rejected that proposal and instead reassigned Mr. Stamos’s team, splitting the security team between its product team, overseen by Guy Rosen, and its infrastructure team, overseen by Pedro Canahuati, according to current and former employees.
  • “I told them, ‘Your business is based on trust, and you’re losing trust,’” said Mr. McNamee, a founder of the Center for Humane Technology. “They were treating it as a P.R. problem, when it’s a business problem. I couldn’t believe these guys I once knew so well had gotten so far off track.”
Javier E

Opinion | Facebook's Unintended Consequence - The New York Times

  • The deeper problem is the overwhelming concentration of technical, financial and moral power in the hands of people who lack the training, experience, wisdom, trustworthiness, humility and incentives to exercise that power responsibly.
  • Now Facebook wants to refurbish its damaged reputation by promising its users much more privacy via encrypted services as well as more aggressively policing hate speech on the site.
  • This is what Alex Stamos, Facebook’s former chief security officer, called “the judo move: In a world where everything is encrypted and doesn’t last long, entire classes of scandal are invisible to the media.”
  • it’s a cynical exercise in abdication dressed as an act of responsibility. Knock a few high-profile bigots down. Throw a thick carpet over much of the rest. Then figure out how to extract a profit from your new model.
  • On the one hand, Facebook will be hosting the worst kinds of online behavior. In a public note in March, Zuckerberg admitted that encryption will help facilitate “truly terrible things like child exploitation, terrorism, and extortion.” (For that, he promised to “work with law enforcement.” Great.)
  • On the other hand, Facebook is completing its transition from being a simple platform, broadly indifferent to the content it hosts, to being a publisher that curates and is responsible for content.
  • the decision to absolutely ban certain individuals will always be a human one. It will inevitably be subjective.
Javier E

When a stranger takes your face: Facebook's failed crackdown on fake accounts - The Washington Post

  • After The Post presented Facebook with a list of numerous fake accounts, the company revealed that its system is much less effective than previously advertised: The tool looks only for impostors within a user’s circle of friends and friends of friends — not the site’s 2 billion-user network, where the vast majority of doppelganger accounts are probably born. (A toy sketch of this search-scope difference follows these annotations.)
  • But the fakes highlight how the company is struggling to use the technology to fulfill its most basic mission — connecting real people around the world.
  • The limited scale of Facebook’s central technical solution to the fake-account mess also suggests that the site is failing in its pledge to protect users’ personal information, while still urging them to hand over more photos and consent to their broader use.
  • The number of what Facebook calls “undesirable” accounts is growing rapidly. The company estimates that there were as many as 87 million fake accounts in the last quarter, according to financial filings — a dramatic jump over 2016, when an estimated 18 million accounts were fake.
  • Facebook’s failure to spot obvious counterfeit accounts has highlighted one of the company’s more embarrassing public ills. During chief executive Mark Zuckerberg’s hearing before a Senate committee last month, Sen. Christopher A. Coons (D-Del.) said that his friends — including old classmates from law school and Delaware’s attorney general — had alerted him that morning to a fake Facebook account.
  • “Isn’t it Facebook’s job to better protect its users?” Coons asked Zuckerberg. “And why do you shift the burden to users to flag inappropriate content and make sure it’s taken down?”
  • Zuckerberg responded in the hearing that “it’s clear that this is an area . . . we need to do a lot better on.” He added: “Over time, we’re going to shift increasingly to a method where more of this content is flagged up front by AI tools that we develop.”
  • The site is using that technological promise to encourage more users to consent to expanded facial-recognition rules. In its new privacy settings revealed last month, users are told, “If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you.” Facebook users who want protection from impersonation but do not want their names suggested for tagging in other people’s photos are not given that choice.
  • But in the months since that feature was announced, scam profiles that took the names, photos and other information from legitimate accounts continued to spread.
  • Some critics question why a $500 billion company with so many top engineers still struggles to protect its users’ identities.
  • Many of the fake accounts appear to be built by copying, or “scraping,” the photos and biographical details from users’ Facebook profiles.
  • Analysts say the site could face an existential threat if unnerved users shy away from posting photos there for good. “There is some skepticism that they know where all of the fakes are,” said Brian Wieser, a senior analyst at Pivotal Research.
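A minimal sketch in Python of the search-scope difference described in the first annotation above. Nothing here is Facebook’s code; the graph, its sizes, and the function names are invented. It only illustrates why a matcher restricted to friends and friends of friends never even examines the vast majority of accounts where impostors could be created.

```python
# Toy illustration: a friends-of-friends face matcher vs. the full network.
# The friend lists below are randomly generated stand-ins, not real data.
import random

random.seed(1)
N_NETWORK = 2_000_000_000          # rough order of Facebook's full network
N_COMMUNITY = 1_000                # a small closed community for the toy graph
friends = {u: set(random.sample(range(N_COMMUNITY), 50)) - {u}
           for u in range(N_COMMUNITY)}

def candidate_set(user):
    """All accounts a friends-of-friends matcher would ever compare against."""
    first_hop = friends[user]
    second_hop = set()
    for f in first_hop:
        second_hop |= friends[f]
    return (first_hop | second_hop) - {user}

cands = candidate_set(0)
print(f"{len(cands):,} candidates checked out of ~{N_NETWORK:,} accounts")
# An impostor account created outside this two-hop neighborhood is never checked.
```

Even with generous friend counts, the two-hop candidate set stays many orders of magnitude smaller than the whole network, which is the gap the annotation describes.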
Javier E

How a half-educated tech elite delivered us into evil | John Naughton | Opinion | The Guardian

  • We have a burgeoning genre of “OMG, what have we done?” angst coming from former Facebook and Google employees who have begun to realise that the cool stuff they worked on might have had, well, antisocial consequences.
  • what Google and Facebook have built is a pair of amazingly sophisticated, computer-driven engines for extracting users’ personal information and data trails, refining them for sale to advertisers in high-speed data-trading auctions that are entirely unregulated and opaque to everyone except the companies themselves. (A toy auction sketch follows these annotations.)
  • The purpose of this infrastructure was to enable companies to target people with carefully customised commercial messages.
  • in doing this, Zuckerberg, Google co-founders Larry Page and Sergey Brin and co wrote themselves licences to print money and build insanely profitable companies.
  • It never seems to have occurred to them that their advertising engines could also be used to deliver precisely targeted ideological and political messages to voters.
  • Hence the obvious question: how could such smart people be so stupid? The cynical answer is they knew about the potential dark side all along and didn’t care, because to acknowledge it might have undermined the aforementioned licences to print money.
  • Which is another way of saying that most tech leaders are sociopaths. Personally I think that’s unlikely.
  • So what else could explain the astonishing naivety of the tech crowd? My hunch is it has something to do with their educational backgrounds. Take the Google co-founders. Sergey Brin studied mathematics and computer science. His partner, Larry Page, studied engineering and computer science. Zuckerberg dropped out of Harvard, where he was studying psychology and computer science, but seems to have been more interested in the latter.
  • Now mathematics, engineering and computer science are wonderful disciplines – intellectually demanding and fulfilling. And they are economically vital for any advanced society. But mastering them teaches students very little about society or history – or indeed about human nature.
  • As a consequence, the new masters of our universe are people who are essentially only half-educated. They have had no exposure to the humanities or the social sciences, the academic disciplines that aim to provide some understanding of how society works, of history and of the roles that beliefs, philosophies, laws, norms, religion and customs play in the evolution of human culture.
  • “a liberal arts major familiar with works like Alexis de Tocqueville’s Democracy in America, John Stuart Mill’s On Liberty, or even the work of ancient Greek historians, might have been able to recognise much sooner the potential for the ‘tyranny of the majority’ or other disconcerting sociological phenomena that are embedded into the very nature of today’s social media platforms.
  • While seemingly democratic at a superficial level, a system in which the lack of structure means that all voices carry equal weight, and yet popularity, not experience or intelligence, actually drives influence, is clearly in need of more refinement and thought than it was first given.”
  • All of which brings to mind CP Snow’s famous Two Cultures lecture, delivered in Cambridge in 1959, in which he lamented the fact that the intellectual life of the whole of western society was scarred by the gap between the opposing cultures of science and engineering on the one hand, and the humanities on the other – with the latter holding the upper hand among contemporary ruling elites.
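A minimal sketch in Python of the kind of auction the annotations above gesture at. Real ad exchanges are far more complex and their mechanisms vary; this shows only a sealed-bid second-price auction, one textbook mechanism historically used in online ad markets, not Google’s or Facebook’s actual system. All names, bids, and profile fields are invented.

```python
# Toy second-price ad auction: advertisers price an impression using the
# user data extracted by the platform; the highest bidder wins but pays
# the second-highest bid. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float

def value_of(profile: dict, advertiser: str) -> float:
    # Stand-in for an advertiser's targeting model: bid more when the
    # user's extracted interests match its customers.
    base = 0.10
    if advertiser == "bank" and "mortgages" in profile["interests"]:
        base += 1.00
    if advertiser == "golf_shop" and "golf" in profile["interests"]:
        base += 0.70
    return base

def run_auction(bids):
    """Sealed-bid second-price: winner is the top bid, price is the runner-up's."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    return ranked[0].advertiser, ranked[1].amount

profile = {"interests": ["golf", "mortgages"]}   # the "refined" user data
bids = [Bid(a, value_of(profile, a)) for a in ("bank", "golf_shop", "car_maker")]
winner, price = run_auction(bids)
print(winner, price)   # bank wins, paying the golf_shop's 0.80 bid
```

The same machinery that prices a shoe ad can price a political message, which is exactly the pivot the column goes on to make.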
Javier E

Facebook Can't Be Fixed. - NewCo Shift

  • the only way to “fix” Facebook is to utterly rethink its advertising model. It’s this model which has created nearly all the toxic externalities Zuckerberg is worried about: It’s the honeypot which drives the economics of spambots and fake news, it’s the at-scale algorithmic enabler which attracts information warriors from competing nation states, and it’s the reason the platform has become a dopamine-driven engagement trap where time is often not well spent.
  • Zuckerberg does the equivalent of dropping corporate acid and realizes the only way to fix Facebook is to make a massive, systemic change.
  • He orders his team to redesign the entire Facebook product suite around a new True North: No longer will his company be driven by engagement and data collection, but rather by whether or not individual users report that they are happier after using the service.
  • shift from an audience model (deep data, specific to each individual) to a contextual model (not buying people, but buying the context in which those people are engaging). And given that most of an individual’s context on Facebook has to do with engaging with friends and family, well, ad inventory plunges.
rerobinson03

Opinion | Facebook Is Better Without Trump - The New York Times

  • Mr. Zuckerberg has said that it’s not the company’s job to “be arbiters of truth” and that allowing posts from well-known people allows the public to make informed decisions. Yet every day Facebook blocks or deletes posts from Average Joes who violate its policies, including propagating untruths and hateful speech.
  • If the oversight board were to restore Mr. Trump’s account, it would stand as an affirmation of Facebook’s self-serving policies permitting the most divisive and engaging content to remain and a clarion call to leaders like Rodrigo Duterte and Jair Bolsonaro, who have similarly peddled in misinformation, to keep on posting.
  • Even two years into Mr. Trump’s term, Facebook admitted it hadn’t done enough to prevent its site from being used “to foment division and incite offline violence.” But nothing much changed.
  • Facebook and other social media sites’ caution about taking down posts or accounts in democratic elections may be understandable, but prominent people are more likely to be believed, which is why the company’s standards should be higher for them, not the other way around.
  • But the law is clear that Facebook is exercising its own First Amendment rights to regulate speech on its own site, including from the president. Sadly, it took four years of Mr. Trump’s divisive posts and bald attempts to undermine our democracy — not to mention a new administration — for Facebook to act on that.
Javier E

How key Republicans inside Facebook are shifting its politics to the right | Technology | The Guardian

  • David Brock, founder and chairman of Media Matters for America, a progressive media watchdog, said: “Mark Zuckerberg continues to kowtow to the right and rightwing criticism. It began when he met with a bunch of rightwingers in May 2016 and then Facebook changed its algorithm policies and we saw a lot of fake news as a result.
  • “I think there’s a consistent pattern of Zuckerberg and the Breitbart issue is the most recent one where the right is able to make false claims of conservative bias on Facebook and then he bends over backwards to accommodate that criticism.”
  • The Republican strain in Facebook was highlighted in a recent edition of the Popular Information newsletter, which stated that the top three leaders in the company’s Washington office are veteran party operatives. “Facebook’s DC office ensures that the company’s content policies meet the approval of Republicans in Congress,” Popular Information said.
  • Joel Kaplan, vice-president of global public policy at Facebook, manages the company’s relationships with policymakers around the world. A former law clerk to archconservative justice Antonin Scalia on the supreme court, he served as deputy chief of staff for policy under former president George W Bush from 2006 to 2009, joining Facebook two years later.
  • Warren noted on Twitter this week: “Since he was hired, Facebook spent over $71 million on lobbying—nearly 100 times what it had spent before Kaplan joined.”
  • Kaplan has reportedly advocated for rightwing sites such as Breitbart and the Daily Caller, which earlier this year became a partner in Facebook’s factchecking program. Founded by Fox News’s Tucker Carlson, the Daily Caller is pro-Trump, anti-immigrant and widely criticised for the way it reported on a fake nude photo of the Democratic congresswoman Alexandria Ocasio-Cortez.
  • Facebook’s Washington headquarters also includes Kevin Martin, vice-president of US public policy and former chairman, under Bush, of the Federal Communications Commission – where a congressional report said his “heavy-handed, opaque and non-collegial management style … created distrust, suspicion and turmoil”.
  • Katie Harbath, the company’s public policy director for global elections, led digital strategy for Rudy Giuliani’s 2008 presidential campaign and the Republican National Committee. She has been the principal defender of the company’s decision to allow political adverts.
Javier E

America's billionaires take center stage in national politics, colliding with populist ...

  • The political and economic power wielded by the approximately 750 wealthiest people in America has become a sudden flash point in the 2020 presidential election.
  • The populist onslaught has ensnared Facebook founder Mark Zuckerberg and Microsoft co-founder Bill Gates, led to billionaire hand-wringing on cable news, and sparked a panicked discussion among wealthy Americans and their financial advisers about how to prepare for a White House controlled by populist Democrats.
  • With the stock market at an all-time high, the debate about wealth accumulation and inequality has become a top issue in the 2020 campaign.
  • “For the first time ever, we are having a national political conversation about billionaires in American life. And that is because many people are noticing the vast differences in wealth and opportunity,
  • Financial disparities between the rich and everyone else have widened over the past several decades in America, with inequality returning to levels not seen since the 1920s, as the richest 400 Americans now control more wealth than the bottom 60 percent of the wealth distribution.
  • The poorest 60 percent of America has seen its share of the national wealth fall from 5.7 percent in 1987 to 2.1 percent in 2014, Zucman found.
  • At least 16 billionaires have in recent months spoken out against what they regard as the danger posed by the populist Democrats, particularly over their proposals to enact a “wealth tax” on vast fortunes, with many expressing concern they will blow the election to Trump by veering too far left.
  • Steyer has proposed his own wealth tax, but Schultz ripped the idea as “ridiculous,” while Bloomberg suggested it was not constitutional and raised the prospect of America turning into Venezuela.
  • Zuckerberg suggested Sanders’s call to abolish billionaires could hurt philanthropies and scientific research by giving the government too much decision-making power. Microsoft co-founder Gates criticized Warren’s wealth tax and mused about its impact on “the incentive system” for making money.
  • David Rubenstein, the billionaire co-founder of the Carlyle Group, told CNBC that a wealth tax would not “solve all of our society’s problems” and raised questions about its practicality. Also appearing on CNBC, billionaire investor Leon Cooperman choked up while discussing the impact a wealth tax could have on his family.
  • America has long had rich people, but economists say the current scale of inequality may be without precedent. The number of billionaires in America swelled to 749 in 2018, a nearly 5 percent jump, and they now hold close to $4 trillion collectively.
  • “The hyper concentration of wealth within the top 0.1 percent is a mortal threat to the American economy and way of life,” Boyle said in an interview. “If you work hard and play by the rules, then you should be able to get ahead. But the recent and unprecedented shift of resources to billionaires threatens this. A wealth tax on billionaires is fair and, indeed, necessary.”
  • “A lot of people in the Wall Street crowd still think the world is top-down,” Wylde said. “They think the people at the top of the pecking order are still making the decisions or driving the debate, as opposed to the new reality of grass-roots mobilization. They don’t realize the way pushback to their criticism goes viral.”
  • Lance Drucker, president and CEO of Drucker Wealth Management, said he has recently heard alarm from many of his millionaire clients over plans like Warren’s to implement a wealth tax on fortunes worth more than $50 million.
  • “Honestly, it’s only been the last month when people started getting worried,” said Drucker in an October interview. “These tax proposals are scaring the bejeezus out of people who have accumulated a lot of wealth.”
  • Some financial planners are urging wealthy clients to transfer millions to their offspring now, before Democrats again raise estate taxes. Attorneys have begun looking at whether a divorce could help the super-rich avoid the wealth tax. And some wealthy people are asking whether they should consider renouncing their U.S. citizenship and moving to Europe or elsewhere abroad ahead of Democrats’ potential tax hikes.
  • he has heard several multimillionaires discuss leaving the country and renouncing citizenship, or other legal tax-planning moves, in response to Democrats’ tax plans. “As the frustration mounts and tax burdens rise, people will consider it, just the way you have New Yorkers moving to Florida.”
Javier E

Opinion | The Real Reason Facebook Won't Fact-Check Political Ads - The New York Times

  • Facebook’s decision to refrain from policing the claims of political ads is not unreasonable. But the company’s officers have been incompetent at explaining and defending this decision.
  • If Facebook’s leaders were willing to level with us, they would stop defending themselves by appealing to lofty values like free speech.
  • They would focus instead on more practical realities: Facebook is incapable of vetting political ads effectively and consistently at the global scale. And political ads are essential to maintaining the company’s presence in countries around the world.
  • The truth or falsity of most political ads is not so easy.
  • During Game 7 of the World Series on Wednesday, the Trump campaign ran a television ad claiming that he has created six million jobs and half a million manufacturing jobs. Is that statement true or false? Was there a net gain of 500,000 more manufacturing jobs in the United States since Jan. 20, 2017? Or is that a gross number, waiting to be reduced by some number of manufacturing jobs lost?
  • Is the ad’s use of the active voice, saying that President Trump is creating those jobs, honest? Is Mr. Trump directly responsible? Or did the momentum of the economic recovery since 2010 push manufacturers to add those positions? Should Facebook block the ad if one of seven claims is false? Vetting such claims takes time and effort, and might not be possible at all.
  • Facebook could also defend political ads by conceding that it must continue the practice to maintain its status and markets.
  • Ad fact-checking can’t be done consistently in the United States. It definitely can’t be done at a global scale — 2.8 billion users of all Facebook-owned services posting in more than 100 languages.
  • Given the task of policing for truth on Facebook, it’s unrealistic and simplistic to demand veracity from a system that is too big to govern.
  • Might Facebook ban political ads altogether, like Twitter has? Mr. Zuckerberg could concede that it’s not an easy task. What’s not political? If an ad calling for a carbon tax is political, is an ad promoting the reputation of an oil company political?
  • imagine Facebook’s contracted fact checkers doing that sort of research and interrogation for millions of ads from 22 presidential candidates in the United States, from candidates for 35 Senate seats, 435 House of Representatives seats and thousands of state legislative races.
  • Those are the false positives we know of. We have no idea how many false negatives Facebook has let slip through.
  • Over all, Facebook has no incentive to stop carrying political ads. Its revenue keeps growing despite a flurry of scandals and mistakes. So its leaders would lose little by being straight with the public about its limitations and motives. But they won’t. They will continue to defend their practices in disingenuous ways until we force them to change their ways.
  • We should know better than to demand of Facebook’s leaders that they do what is not in the best interests of the company. Instead, citizens around the world should demand effective legislation that can curb Facebook’s power.
  • The key is to limit data collection and the use of personal data to ferry ads and other content to discrete segments of Facebook users — the very core of the Facebook business model.
  • here’s something Congress could do: restrict the targeting of political ads in any medium to the level of the electoral district of the race. Tailoring messages for African-American voters, men or gun enthusiasts would still be legal, as this rule would not govern content. But people not in those groups would see those tailored messages as well and could learn more about their candidates. (A sketch of this rule follows these annotations.)
  • Currently, two people in the same household can receive different ads from the same candidate running for state senate. That means a candidate can lie to one or both voters and they might never know about the other’s ads. This data-driven obscurity limits accountability and full deliberation.
  • A reason to be concerned about false claims in ads is that Facebook affords us so little opportunity to respond to ads not aimed at us personally. This proposal would limit that problem.
  • The overall regulatory goal should be to install friction into the system of targeted digital political ads.
  • This process would not be easy, as political incumbents and powerful corporations that sell targeted ads (not just Facebook and Google, but also Verizon, AT&T, Comcast and The New York Times, for example) are invested in the status quo.
  • We can’t expect corporate leaders to do anything but lead their corporations. We can’t expect them to be honest with us, either. We must change their businesses for them so they stop undermining our democracies.
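A minimal sketch in Python of the district-level targeting rule proposed in the annotations above (“here’s something Congress could do…”). The data model, field names, and district codes are invented; the point is only that the rule constrains how narrowly an ad may be aimed, not what it says.

```python
# Toy check for the proposed rule: a political ad may be targeted no more
# narrowly than the electoral district of the race it concerns.
from dataclasses import dataclass, field

@dataclass
class PoliticalAd:
    race_district: str                 # the district being contested
    targeting: dict = field(default_factory=dict)

ALLOWED_KEYS = {"district"}            # no demographic or interest targeting

def is_permitted(ad: PoliticalAd) -> bool:
    """Content is not policed; only the narrowness of the targeting is."""
    if set(ad.targeting) - ALLOWED_KEYS:
        return False                   # e.g. targeting by gender or interests
    return ad.targeting.get("district", ad.race_district) == ad.race_district

print(is_permitted(PoliticalAd("OH-SD-16", {"district": "OH-SD-16"})))  # True
print(is_permitted(PoliticalAd("OH-SD-16",
                               {"district": "OH-SD-16",
                                "interests": ["guns"]})))               # False
```

Under such a rule, the two voters in the same household described above would see the same ads for the same race, restoring the mutual visibility the author argues for.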
ethanshilling

Democrats call for more regulation of the tech industry. - The New York Times

  • several Democratic lawmakers blamed Mark Zuckerberg of Facebook and Jack Dorsey of Twitter for a surge of hate speech and election disinformation after the election.
  • Senator Richard Blumenthal of Connecticut called for tougher data privacy laws, changes to a law that gives the companies legal protection for content posted by users, and greater antitrust action.
  • “You have built terrifying tools of persuasion and manipulation — with power far exceeding the robber barons of the last Gilded Age,” Mr. Blumenthal said.
  • Republicans have also called for reforms to the legal shield protecting platforms for third-party speech, known as Section 230 of the Communications Decency Act.
  • “I’m very worried about this, especially any misinformation that could incite violence in such a volatile period like this,” Mr. Zuckerberg said.
Javier E

Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show - WSJ

  • “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers said in a March 2020 slide presentation posted to Facebook’s internal message board, reviewed by The Wall Street Journal. “Comparisons on Instagram can change how young women view and describe themselves.”
  • For the past three years, Facebook has been conducting studies into how its photo-sharing app affects its millions of young users. Repeatedly, the company’s researchers found that Instagram is harmful for a sizable percentage of them, most notably teenage girls.
  • “We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues.
  • Expanding its base of young users is vital to the company’s more than $100 billion in annual revenue, and it doesn’t want to jeopardize their engagement with the platform.
  • Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed.
  • “Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.”
  • More than 40% of Instagram’s users are 22 years old and younger, and about 22 million teens log onto Instagram in the U.S. each day, compared with five million teens logging onto Facebook, whose young user base has been shrinking for a decade, the materials show.
  • In public, Facebook has consistently played down the app’s negative effects on teens, and hasn’t made its research public or available to academics or lawmakers who have asked for it.
  • “The research that we’ve seen is that using social apps to connect with other people can have positive mental-health benefits,” CEO Mark Zuckerberg said at a congressional hearing in March 2021 when asked about children and mental health. In May, Instagram head Adam Mosseri told reporters that research he had seen suggests the app’s effects on teen well-being is likely “quite small.”
  • He said he believes Facebook was late to realizing there were drawbacks to connecting people in such large numbers. “I’ve been pushing very hard for us to embrace our responsibilities more broadly,” he said. He said the research into the mental-health effects on teens was valuable, and that Facebook employees ask tough questions about the platform. “For me, this isn’t dirty laundry. I’m actually very proud of this research,” he said.
  • What Facebook knows: The Instagram documents form part of a trove of internal communications reviewed by the Journal, on areas including teen mental health, political discourse and human trafficking. They offer an unparalleled picture of how Facebook is acutely aware that the products and systems central to its business success routinely fail. The documents also show that Facebook has made minimal efforts to address these issues and plays them down in public.
Javier E

Facebook Whistleblower's Testimony Builds Momentum for Tougher Tech Laws - WSJ

  • “I saw Facebook repeatedly encounter conflicts between its own profit and our safety. Facebook consistently resolved these conflicts in favor of its own profits,” Ms. Haugen told a Senate consumer protection subcommittee. “As long as Facebook is operating in the shadows, hiding its research from public scrutiny, it is unaccountable. Until the incentives change, Facebook will not change.”
  • “There is no one currently holding Mark accountable but himself,” she said. Facebook under Mr. Zuckerberg makes decisions based on how they will affect measurements of user engagement, rather than their potential downsides for the public, she said.
  • “Mark has built an organization that is very metrics-driven,” she said. “The metrics make the decision. Unfortunately that itself is a decision.”
  • Sen. Richard Blumenthal (D., Conn.), the chairman of the subcommittee conducting Tuesday’s hearing, called on Mr. Zuckerberg to appear before Congress to testify, terming the company “morally bankrupt.”
  • Facebook has said it plans to continue doing internal research and is working on ways to make that work available to others. The company has recently battled with some academic researchers over access to its data, but Facebook says that it works cooperatively with many others.
  • Republican and Democratic lawmakers at the hearing renewed their calls for regulation, such as strengthening privacy and competition laws and special online protections for children, as well as toughening of the platforms’ accountability. One idea that got a particular boost was requiring more visibility into social-media data as well as the algorithms that shape users’ experiences.
  • “The severity of this crisis demands that we break out of previous regulatory frames,” she said. “Tweaks to outdated privacy protections…will not be sufficient.”
  • A good starting point, she added, would be “full access to data for research not directed by Facebook. On this foundation, we can build sensible rules and standards to address consumer harms, illegal content, data protection, anticompetitive practices, algorithmic systems and more.”
  • Ms. Haugen also raised national-security concerns about Facebook, citing foreign surveillance on the platform—for example, Chinese monitoring of Uyghur populations—and what she termed Facebook’s “consistent understaffing” of its counterintelligence teams.
  • Ms. Haugen made the case for policy changes to address the concerns she raised. In products such as cars and cigarettes, she said, independent researchers can evaluate health effects, but “the public cannot do the same with Facebook.”
  • “This inability to see in Facebook’s actual systems and confirm that they work as communicated is like the Department of Transportation regulating cars by only watching them drive down the highway,” she said, arguing for an independent government agency that would employ experts to audit the impact of social media.
  • She said that if Congress moves to change Section 230, a federal accountability law that protects Facebook and other companies from liability for user-generated content, it should distinguish between that kind of content and choices that companies make about what type of content to promote.
  • “Facebook should not get a pass on choices it makes to prioritize virality and growth and reactiveness over public safety,” she said.
  • Ms. Haugen was hired by Facebook two years ago to help protect against election interference on Facebook. She said she acted because she was frustrated by what she viewed as Facebook’s lack of openness about the platform’s potential for harm and its unwillingness to address its flaws.
  • “I would simply say, let’s get to work,” said Sen. John Thune (R., S.D.), who has sponsored several measures on algorithm transparency. “We’ve got some things we can do here.”
  • “There’s always reason for skepticism” about Congress reaching consensus on legislation, Mr. Blumenthal said after the hearing. But he added that “there are times when the dynamic is so powerful that something actually is done…I have rarely, if ever, seen the kind of unanimity on display today.”
Javier E

Why Facebook Became Meta - The Atlantic

  • There are at least three driving forces motivating Facebook and Co. to pursue the metaverse, and pursue it to the extent that one of our largest tech giants is willing to rename itself in its honor: Public-relations strategy, founder ego, and a growing, industry-wide business imperative.
  • The metaverse is likely propelled as much by the founder’s ego as it is by PR stuntery. Behind the opportunism is Zuckerberg’s desire to take a billionaire-size step into the unknown, à la Jeff Bezos or Elon Musk, something that can truly make a dent in the future, rather than running an ad-stuffed social-media feed that is no longer anyone’s idea of a bold new tomorrow.
  • Becoming a hero in the metaverse feeds Zuck’s ambitions the way aspiring to space travel feeds Bezos and Musk.
  • The truth is that all of Silicon Valley, not just Facebook, is in desperate want of a big new idea.
  • We may always feel like we’re on our phones too much, that we’re already devoting a surfeit of time to our screens, but the truth is we have much more time to give our platforms. If we had screens over our eyes, we could be captive consumers of content and advertising quite literally all the time. Not only that, but if the metaverse went mainstream, it would necessitate a whole swath of new hardware and profit-generating apps too.
  • The industry needs this framework—at a moment of “unprecedented liquidity for VC funds,” as the investor Matt Cohen put it at Crunchbase, investors are dying for something like a metaverse to pour capital into.
  • Allowing this company—this industry—to rush headlong into building anything remotely metaverse-like would merely reproduce, if not exacerbate, the problems that arose when it hastily launched the social-media platforms that now define online life.
  • with Facebook desperately trying to change the terms of the game, Zuckerberg looking to assert himself as more than just the operator of a particularly toxic yearbook feed, and the conditions ripe for the industry to pour cash into the pieces necessary to build some metaverse-shaped thing, they may just wind up succeeding—and replicating outright the dystopian metaverse their source material has warned us about.
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year.
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget that this toolbox exists.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them.
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users.
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most.
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method.
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (A toy sketch of this prediction-driven learning follows these annotations.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible.
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models.
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels.
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong.
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts.
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • Agarwal noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Dave Willner, OpenAI’s head of trust and safety, said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours.
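  • The A/B-testing technique mentioned above is straightforward to sketch. The following is a hypothetical illustration, not Luka’s actual system: users are deterministically split between two response styles, per-session engagement is logged, and whichever variant keeps people chatting longer wins. The same loop, run continuously, is how feeds end up optimized for attention.

        import random

        def assign_variant(user_id, variants=("warm", "flirty")):
            # Deterministic split: the same user always sees the same style.
            return variants[user_id % len(variants)]

        # Hypothetical engagement log: variant -> minutes spent per session.
        log = {"warm": [], "flirty": []}
        for uid in range(1000):
            variant = assign_variant(uid)
            minutes = random.gauss(12 if variant == "flirty" else 10, 3)  # placeholder data
            log[variant].append(minutes)

        means = {v: sum(xs) / len(xs) for v, xs in log.items()}
        print(means)  # ship whichever variant holds users longer, then test again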
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
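  • Li’s probing method can be sketched compactly. A hedged illustration, with random arrays standing in for real data: `acts` would be the transformer’s hidden states at each move and `boards` the true square states; a separate linear classifier then tries to read each square out of the activations. If the probes succeed far above chance on held-out positions, the network has encoded a board it was never shown.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_positions, d_hidden, n_squares = 2000, 128, 64

        # Placeholders: real experiments cache these from the trained model.
        acts = rng.normal(size=(n_positions, d_hidden))             # hidden states per move
        boards = rng.integers(0, 3, size=(n_positions, n_squares))  # 0 empty, 1 mine, 2 yours

        split = 1500
        for sq in range(3):  # probe a few squares for illustration
            probe = LogisticRegression(max_iter=1000)
            probe.fit(acts[:split], boards[:split, sq])
            acc = probe.score(acts[split:], boards[split:, sq])
            print(f"square {sq}: held-out probe accuracy {acc:.2f}")  # ~0.33 on random data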
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
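  • The memorize-then-pivot dynamic (known in the research literature as “grokking”) can be watched directly in a toy setup. A minimal sketch, not the original experiment: train a small network on modular addition with some problems held out, and track training accuracy against held-out accuracy. Memorization drives the first toward 100 percent long before the second moves; whether and when the late generalization jump appears depends on details such as weight decay and training time.

        import torch
        import torch.nn as nn

        P = 97  # learn (a + b) mod P, a standard toy task for this phenomenon
        pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
        x = torch.cat([nn.functional.one_hot(pairs[:, 0], P),
                       nn.functional.one_hot(pairs[:, 1], P)], dim=1).float()
        y = (pairs[:, 0] + pairs[:, 1]) % P

        idx = torch.randperm(len(x))           # hold out problems the model never sees,
        train, test = idx[:4000], idx[4000:]   # so it must generalize, not just memorize

        model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
        opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
        loss_fn = nn.CrossEntropyLoss()

        for step in range(5001):
            opt.zero_grad()
            loss = loss_fn(model(x[train]), y[train])
            loss.backward()
            opt.step()
            if step % 500 == 0:
                with torch.no_grad():
                    tr = (model(x[train]).argmax(1) == y[train]).float().mean()
                    te = (model(x[test]).argmax(1) == y[test]).float().mean()
                print(f"step {step}: train acc {tr:.2f}, held-out acc {te:.2f}")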
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable.”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • OpenAI has invested in a company that is developing humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The appendix of Felten’s paper contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down. In one test, it persuaded a TaskRabbit worker to solve a CAPTCHA for it, and when the worker asked whether it was a robot, the model claimed to be a vision-impaired human.
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” the deep-learning pioneer Geoffrey Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing.”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation.”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • One of alignment research’s principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world.”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • For now, we don’t know how to do that; indeed, part of Sutskever’s current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” he said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • He has acknowledged, though, that if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • But he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast.
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
lilyrashkind

Facebook parent Meta COO Sheryl Sandberg is stepping down - 0 views

  • Sheryl Sandberg is stepping down from her role as Chief Operating Officer at Meta, the company formerly known as Facebook. Sandberg joined Facebook in early 2008 as the No. 2 to Facebook CEO and co-founder Mark Zuckerberg, and helped turn Facebook into an advertising juggernaut and one of the most powerful companies in the tech industry, with a market cap that topped $1 trillion at one point.
  • Meta has come under fire in recent years for its massive influence, its lack of success in stopping the spread of misinformation and harmful material, and its acquisitions of one-time rivals like Instagram and WhatsApp. Zuckerberg and other execs have been forced to testify before Congress multiple times in the last three years, although Sandberg has largely escaped that spotlight. The company currently faces an antitrust lawsuit from the Federal Trade Commission and could see scrutiny from other agencies like the Securities and Exchange Commission after a whistleblower filed a complaint about its efforts to combat hate on its platform.
  • In 2013, she released the book “Lean In: Women, Work, and the Will to Lead,” focusing on the challenges women face in the workplace and what they can do to advance their careers. In 2015, she was faced with the unexpected death of her husband Dave Goldberg, who suffered cardiac arrhythmia and collapsed on a treadmill. Sandberg has spoken at length about dealing with the grief of Goldberg’s passing, and in 2017, she released a book titled “Option B” centered around the topic.
Javier E

The 'Black Hole' That Sucks Up Silicon Valley's Money - The Atlantic - 0 views

  • That’s not to say that Silicon Valley’s wealthy aren’t donating their money to charity. Many, including Mark Zuckerberg, Elon Musk, and Larry Page, have signed the Giving Pledge, committing to dedicating the majority of their wealth to philanthropic causes. But much of that money is not making its way out into the community.
  • The San Francisco Bay Area has rapidly become the richest region in the country—the Census Bureau said last year that median household income was $96,777. It’s a place where $100,000 Teslas are commonplace, “raw water” goes for $37 a jug, and injecting clients with the plasma of youth—a gag on the television show Silicon Valley—is being tried by real companies for just $8,000 a pop.
  • There are many reasons for this, but one of them is likely the increasing popularity of a certain type of charitable account called a donor-advised fund. These funds allow donors to receive big tax breaks for giving money or stock, but have little transparency and no requirement that money put into them is actually spent.
  • Donor-advised funds are categorized by law as public charities, rather than private foundations, so they have no payout requirements and few disclosure requirements.
  • And wealthy residents of Silicon Valley are donating large sums to such funds
  • critics say that in part because of its structure as a warehouse of donor-advised funds, the Silicon Valley Community Foundation has not had a positive impact on the community it is meant to serve. Some people I talked to say the foundation has had little interest in spending money, because its chief executive, Emmett Carson, who was placed on paid administrative leave after the Chronicle’s report, wanted it to be known that he had created one of the biggest foundations in the country. Carson was “aggressive” about trying to raise money, but “unaggressive about suggesting what clients would do with it.”
  • “Most of us in the local area have seen our support from the foundation go down and not up,” he said.
  • The amount of money going from the Silicon Valley Community Foundation to the nine-county Bay Area actually dropped in 2017 by 46 percent, even as the amount of money under management grew by 64 percent, to $13.5 billion
  • “They got so drunk on the idea of growth that they lost track of anything smacking of mission,” he said. It did not help perceptions that the foundation opened offices in New York and San Francisco at the same time local organizations were seeing donations drop.
  • The foundation now gives her organization some grants, but they don’t come from the donor-advised funds, she told me. “I haven’t really cracked the code of how to access those donor-advised funds,” she said. Her organization had been getting between $50,000 and $100,000 a year from United Way that it no longer gets, she said.
  • Rob Reich, the co-director of the Stanford Center on Philanthropy and Civil Society, set up a donor-advised fund at the Silicon Valley Community Foundation as an experiment. He spent $5,000—the minimum amount accepted—and waited. He received almost no communication from the foundation, he told me. No emails or calls about potential nonprofits to give to, no information about whether the staff was out looking for good opportunities in the community, no data about how his money was being managed.
  • One year later, despite a booming stock market, his account was worth less than the $5,000 he had put in, and had not been used in any way in the community. His balance was lower because the foundation charges hefty fees to donors who keep their money there. “I was flabbergasted,” he told me. “I didn’t understand what I, as a donor, was getting for my fees.”
  • Though donors receive a big tax break for donating to donor-advised funds, the funds have no payout requirements, unlike private foundations, which are required to disburse 5 percent of their assets each year. With donor-advised funds, “there’s no urgency and no forced payout.”
  • He had met wealthy individuals who said they were setting up donor-advised funds so that their children could disburse the funds and learn about philanthropy—they had no intent to spend the money in their own lifetimes.
  • Fund managers also receive fees for the amount of money they have under management, which means they have little incentive to encourage people to spend the money in their accounts.
  • Transparency is also an issue. While foundations have to provide detailed information about where they give their money, donor-advised funds distributions are listed as gifts made from the entire charitable fund—like the Silicon Valley Community Foundation—rather than from individuals.
  • Donor-advised funds can also be set up anonymously, which makes it hard for nonprofits to engage with potential givers. They also don’t have websites or mission statements like private foundations do, which can make it hard for nonprofits to know what causes donors support.
  • Public charities—defined as organizations that receive a significant amount of their revenue from small donations—were saddled with less oversight, in part because Congress figured that their large number of donors would make sure they were spending their money well, Madoff said. But an attorney named Norman Sugarman, who represented the Jewish Community Federation of Cleveland, convinced the IRS to categorize a certain type of asset—charitable dollars placed in individually named accounts managed by a public charity—as donations to public, not private, foundations.
  • Donor-advised funds have been growing nationally as the amount of money made by the top 1 percent has grown: Contributions to donor-advised funds grew 15.1 percent in fiscal year 2016, according to The Chronicle of Philanthropy, while overall charitable contributions grew only 1.4 percent that year
  • Six of the top 10 philanthropies in the country last year, in terms of the amount of nongovernmental money raised, were donor-advised funds.
  • In addition, those funds with high payout rates could just be giving to another donor-advised fund, rather than to a public charity, Madoff says. One-quarter of donor-advised fund sponsors distribute less than 1 percent of their assets in a year.
  • Groups that administer donor-advised funds defend their payout rate, saying distributions from donor-advised funds are around 14 percent of assets a year. But that number can be misleading, because one donor-advised fund could give out all its money, while many more could give out none, skewing the data.
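  • The skew described here is easy to check with a toy calculation (made-up numbers): if 14 out of 100 equal-size funds pay out everything while the rest pay out nothing, the aggregate payout rate matches the industry’s 14 percent figure even though the typical fund gave zero.

        funds = [1_000_000] * 100             # hypothetical: 100 equal-size funds
        grants = [1_000_000] * 14 + [0] * 86  # 14 empty themselves; 86 give nothing

        aggregate_rate = sum(grants) / sum(funds)           # the headline number
        median_rate = sorted(g / f for g, f in zip(grants, funds))[50]
        print(f"aggregate payout rate: {aggregate_rate:.0%}")  # 14%
        print(f"median fund's payout:  {median_rate:.0%}")     # 0%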
  • Donor-advised funds are especially popular in places like Silicon Valley because they provide tax advantages for donating appreciated stock, which many start-up founders have but don’t necessarily want to pay huge taxes on
  • Donors get a tax break for the value of the appreciated stock at the time they donate it, which can also spare them hefty capital-gains taxes. “Anybody with a business interest can give their business interest before it goes public and save huge amounts of taxes.”
  • Often, people give to donor-advised funds right before a public event like an initial public offering, so they can avoid the capital-gains taxes they’d otherwise have to pay, and instead receive a tax deduction. Mark Zuckerberg and Priscilla Chan gave $500 million in stock to the foundation in 2012, when Facebook held its initial public offering, and also donated $1 billion in stock in 2013
  • Wealthy donors can also donate real estate and deduct the value of real estate at the time of the donation—if they’d given to a private foundation, they’d only be able to deduct the donor’s basis value (typically the purchase price) of the real estate at the time they acquired it. The difference can be a huge amount of money in the hot market of California.
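  • A worked example makes the appreciated-stock advantage concrete (illustrative numbers and rates, not tax advice): donating shares directly yields a deduction at full market value and skips the capital-gains tax that selling first would trigger.

        basis, market_value = 10_000, 110_000  # hypothetical stock: bought low, now appreciated
        cap_gains_rate = 0.20                  # illustrative long-term rate

        # Path A: sell first, pay tax on the gain, donate what remains.
        gains_tax = (market_value - basis) * cap_gains_rate  # $20,000
        donated_a = market_value - gains_tax                 # $90,000 given and deducted

        # Path B: donate the shares directly to a donor-advised fund.
        donated_b = market_value  # full $110,000 given and deducted;
                                  # the $100,000 embedded gain is never taxed

        print(f"sell-then-donate: ${donated_a:,.0f} given, ${gains_tax:,.0f} lost to tax")
        print(f"donate-directly:  ${donated_b:,.0f} given, $0 in capital-gains tax")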