History Readings: group items tagged "employee"

Javier E

Inside Amazon: Wrestling Big Ideas in a Bruising Workplace - The New York Times

  • At Amazon, workers are encouraged to tear apart one another’s ideas in meetings, toil long and late (emails arrive past midnight, followed by text messages asking why they were not answered), and are held to standards that the company boasts are “unreasonably high.” The internal phone directory instructs colleagues on how to send secret feedback to one another’s bosses. Employees say it is frequently used to sabotage others. (The tool offers sample texts, including this: “I felt concerned about his inflexibility and openly complaining about minor tasks.”)
  • The company’s winners dream up innovations that they roll out to a quarter-billion customers and accrue small fortunes in soaring stock. Losers leave or are fired in annual cullings of the staff — “purposeful Darwinism.”
  • his enduring image was watching people weep in the office, a sight other workers described as well. “You walk out of a conference room and you’ll see a grown man covering his face,” he said. “Nearly every person I worked with, I saw cry at their desk.”
  • Last month, it eclipsed Walmart as the most valuable retailer in the country, with a market valuation of $250 billion, and Forbes deemed Mr. Bezos the fifth-wealthiest person on earth.
  • Others who cycled in and out of the company said that what they learned in their brief stints helped their careers take off. And more than a few who fled said they later realized they had become addicted to Amazon’s way of working.
  • Amazon may be singular but perhaps not quite as peculiar as it claims. It has just been quicker in responding to changes that the rest of the work world is now experiencing: data that allows individual performance to be measured continuously, come-and-go relationships between employers and employees, and global competition in which empires rise and fall overnight. Amazon is in the vanguard of where technology wants to take the modern office: more nimble and more productive, but harsher and less forgiving.
  • “Organizations are turning up the dial, pushing their teams to do more for less money, either to keep up with the competition or just stay ahead of the executioner’s blade.”
  • At its best, some employees said, Amazon can feel like the Bezos vision come to life, a place willing to embrace risk and strengthen ideas by stress test. Employees often say their co-workers are the sharpest, most committed colleagues they have ever met, taking to heart instructions in the leadership principles like “never settle” and “no task is beneath them.”
  • In contrast to companies where declarations about their philosophy amount to vague platitudes, Amazon has rules that are part of its daily language and rituals, used in hiring, cited at meetings and quoted in food-truck lines at lunchtime
  • “You can work long, hard or smart, but at Amazon.com you can’t choose two out of three,” Mr. Bezos wrote in his 1997 letter to shareholders
  • Amazon, though, offers no pretense that catering to employees is a priority.
  • As the company has grown, Mr. Bezos has become more committed to his original ideas, viewing them in almost moral terms, those who have worked closely with him say. “My main job today: I work hard at helping to maintain the culture.”
  • perhaps the most distinctive is his belief that harmony is often overvalued in the workplace — that it can stifle honest critique and encourage polite praise for flawed ideas. Instead, Amazonians are instructed to “disagree and commit.”
  • According to early executives and employees, Mr. Bezos was determined almost from the moment he founded Amazon in 1994 to resist the forces he thought sapped businesses over time — bureaucracy, profligate spending, lack of rigor. As the company grew, he wanted to codify his ideas about the workplace, some of them proudly counterintuitive, into instructions simple enough for a new worker to understand, general enough to apply to the nearly limitless number of businesses he wanted to enter and stringent enough to stave off the mediocrity he feared.
  • Company veterans often say the genius of Amazon is the way it drives them to drive themselves. “If you’re a good Amazonian, you become an Amabot,” said one employee, using a term that means you have become at one with the system.
  • But in its offices, Amazon uses a self-reinforcing set of management, data and psychological tools to spur its tens of thousands of white-collar employees to do more and more. “The company is running a continual performance improvement algorithm on its staff,” said Amy Michaels, a former Kindle marketer.
  • As the newcomers acclimate, they often feel dazzled, flattered and intimidated by how much responsibility the company puts on their shoulders and how directly Amazon links their performance to the success of their assigned projects
  • Every aspect of the Amazon system amplifies the others to motivate and discipline the company’s marketers, engineers and finance specialists: the leadership principles; rigorous, continuing feedback on performance; and the competition among peers who fear missing a potential problem or improvement and race to answer an email before anyone else.
  • many others said the culture stoked their willingness to erode work-life boundaries, castigate themselves for shortcomings (being “vocally self-critical” is included in the description of the leadership principles) and try to impress a company that can often feel like an insatiable taskmaster.
  • “One time I didn’t sleep for four days straight,” said Dina Vaccari, who joined in 2008 to sell Amazon gift cards to other companies and once used her own money, without asking for approval, to pay a freelancer in India to enter data so she could get more done. “These businesses were my babies, and I did whatever I could to make them successful.”
  • To prod employees, Amazon has a powerful lever: more data than any retail operation in history. Its perpetual flow of real-time, ultradetailed metrics allows the company to measure nearly everything its customers do.
  • Amazon employees are held accountable for a staggering array of metrics, a process that unfolds in what can be anxiety-provoking sessions called business reviews, held weekly or monthly among various teams. A day or two before the meetings, employees receive printouts, sometimes up to 50 or 60 pages long, several workers said. At the reviews, employees are cold-called and pop-quizzed on any one of those thousands of numbers.
  • Ms. Willet’s co-workers strafed her through the Anytime Feedback Tool, the widget in the company directory that allows employees to send praise or criticism about colleagues to management. (While bosses know who sends the comments, their identities are not typically shared with the subjects of the remarks.) Because team members are ranked, and those at the bottom eliminated every year, it is in everyone’s interest to outperform everyone else.
  • many workers called it a river of intrigue and scheming. They described making quiet pacts with colleagues to bury the same person at once, or to praise one another lavishly. Many others, along with Ms. Willet, described feeling sabotaged by negative comments from unidentified colleagues with whom they could not argue
  • The rivalries at Amazon extend beyond behind-the-back comments. Employees say that the Bezos ideal, a meritocracy in which people and ideas compete and the best win, where co-workers challenge one another “even when doing so is uncomfortable or exhausting,” as the leadership principles note, has turned into a world of frequent combat
  • Resources are sometimes hoarded. That includes promising job candidates, who are especially precious at a company with a high number of open positions. To get new team members, one veteran said, sometimes “you drown someone in the deep end of the pool,” then take his or her subordinates. Ideas are critiqued so harshly in meetings at times that some workers fear speaking up.
  • David Loftesness, a senior developer, said he admired the customer focus but could not tolerate the hostile language used in many meetings, a comment echoed by many others.
  • Each year, the internal competition culminates at an extended semi-open tournament called an Organization Level Review, where managers debate subordinates’ rankings, assigning and reassigning names to boxes in a matrix projected on the wall. In recent years, other large companies, including Microsoft, General Electric and Accenture Consulting, have dropped the practice — often called stack ranking, or “rank and yank” — in part because it can force managers to get rid of valuable talent just to meet quotas.
  • Molly Jay, an early member of the Kindle team, said she received high ratings for years. But when she began traveling to care for her father, who was suffering from cancer, and cut back working on nights and weekends, her status changed. She was blocked from transferring to a less pressure-filled job, she said, and her boss told her she was “a problem.” As her father was dying, she took unpaid leave to care for him and never returned to Amazon.
  • “When you’re not able to give your absolute all, 80 hours a week, they see it as a major weakness,” she said.
  • A woman who had thyroid cancer was given a low performance rating after she returned from treatment. She says her manager explained that while she was out, her peers were accomplishing a great deal. Another employee who miscarried twins left for a business trip the day after she had surgery. “I’m sorry, the work is still going to need to get done,” she said her boss told her. “From where you are in life, trying to start a family, I don’t know if this is the right place for you.”
  • A woman who had breast cancer was told that she was put on a “performance improvement plan” — Amazon code for “you’re in danger of being fired” — because “difficulties” in her “personal life” had interfered with fulfilling her work goals. Their accounts echoed others from workers who had suffered health crises and felt they had also been judged harshly instead of being given time to recover.
  • Amazon retains new workers in part by requiring them to repay a part of their signing bonus if they leave within a year, and a portion of their hefty relocation fees if they leave within two years.
  • In interviews, 40-year-old men were convinced Amazon would replace them with 30-year-olds who could put in more hours, and 30-year-olds were sure that the company preferred to hire 20-somethings who would outwork them.
  • A 2013 survey by PayScale, a salary analysis firm, put the median employee tenure at one year, among the briefest in the Fortune 500
  • Recruiters, though, also say that other businesses are sometimes cautious about bringing in Amazon workers, because they have been trained to be so combative. The derisive local nickname for Amazon employees is “Amholes” — pugnacious and work-obsessed.
  • By the time the dust settles in three years, Amazon will have enough space for 50,000 employees or so, more than triple what it had as recently as 2013.
  • just as Jeff Bezos was able to see the future of e-commerce before anyone else, she added, he was able to envision a new kind of workplace: fluid but tough, with employees staying only a short time and employers demanding the maximum.
  • “Amazon is driven by data,” said Ms. Pearce, who now runs her own Seattle software company, which is well stocked with ex-Amazonians. “It will only change if the data says it must — when the entire way of hiring and working and firing stops making economic sense.”
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” David Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that 2021 incident, in which failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A hacker who took over one of those machines could have sabotaged the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be top priorities for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

Facebook Papers: 'History Will Not Judge Us Kindly' - The Atlantic

  • Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification
  • But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.
  • these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.
  • Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more.
  • And again and again, staffers say, Facebook’s leaders ignore them.
  • Facebook has dismissed the concerns of its employees in manifold ways.
  • One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.
  • When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”)
  • what emerges from a close reading of Facebook documents, and observation of the manner in which the company connects large groups of people quickly, is that Facebook isn’t a passive tool but a catalyst. Had the organizers tried to plan the rally using other technologies of earlier eras, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale.
  • The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.
  • In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.
  • In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
  • By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed.
  • Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.
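A minimal sketch of what per-reaction weighting like this could look like, written in Python. Everything below (names, weights, structure) is a hypothetical illustration, not Facebook's actual code; the only details taken from the reporting above are that reactions other than "like" once counted more heavily in ranking and that the "angry" weight was later dialed to zero.

    # Hypothetical reaction-weighted scoring, for illustration only.
    REACTION_WEIGHTS = {
        "like": 1.0,
        "love": 1.5,
        "care": 1.5,
        "haha": 1.5,
        "wow": 1.5,
        "sad": 1.5,
        "angry": 0.0,  # dialed back to zero, per the change described above
    }

    def engagement_score(reaction_counts):
        """Sum weighted reactions; anger no longer boosts distribution."""
        return sum(REACTION_WEIGHTS.get(r, 0.0) * n
                   for r, n in reaction_counts.items())

    # A post driven mainly by "angry" reactions now ranks below an
    # ordinary post, so fewer users would see it in their feeds.
    outrage_post = {"angry": 900, "like": 100}
    ordinary_post = {"like": 400, "love": 50}
    assert engagement_score(outrage_post) < engagement_score(ordinary_post)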
  • But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.
  • Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
  • This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform
  • Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
  • “I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”
  • It is quite a thing to see, the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.
  • I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards
  • Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them demonstrate how they will drive engagement.
  • The worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms.
  • One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind
  • The note said: We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.
  • Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place.
  • That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.
  • One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence
  • whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”
  • Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program.
  • As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. These teams establish goals that are often in direct conflict with each other.
  • “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”
  • Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:
  • When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others
  • Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.
  • The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers
  • Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups.
  • And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.
  • And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.
  • Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
  • Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.
  • Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do
  • Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform.
  • It knows that repeat offenders are disproportionately responsible for spreading misinformation.
  • All of this makes the platform rely more heavily on ways it can manipulate what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, as well as making reshares highly visible, to keep people hooked.
  • It could consistently enforce its policies regardless of a user’s political power.
  • Facebook could ban reshares.
  • It could choose to optimize its platform for safety and quality rather than for growth.
  • It could tweak its algorithm to prevent widespread distribution of harmful content.
  • Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time.
  • It could make public its rules for how frequently groups can post and how quickly they can grow.
  • It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
  • Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right
  • It could do all of these things. But it doesn’t.
  • Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid.
  • All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power.
  • Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another.
  • When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
  • The lesson for individuals is this: You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.
  • Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content
  • Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.
Javier E

Opinion | Artificial Intelligence Requires Specific Safety Rules - The New York Times

  • For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began criticizing OpenAI’s safety practices, and a wave of articles emerged about its broken promises.
  • These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they’re too scared to speak out? Since existing whistle-blower protections typically cover only the reporting of illegal conduct, they are inadequate here. Artificial intelligence can be dangerous without being illegal
  • A.I. needs stronger protections — like those in place in parts of the public sector, finance and publicly traded companies — that prohibit retaliation and establish anonymous reporting channels.
  • OpenAI has spent the last year mired in scandal.
  • The company’s chief executive was briefly fired after the nonprofit board lost trust in him.
  • Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI’s nondisclosure agreements were illegal.
  • Safety researchers have left the company in droves
  • Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure of more key leaders
  • On Friday, The Wall Street Journal reported that OpenAI rushed testing of a major model in May, attempting to undercut a rival’s publicity; after the release, employees found out the model exceeded the company’s standards for safety. (The company told The Journal the findings were the result of a methodological flaw.)
  • This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and leading A.I. researchers have warned that the technology could lead to human extinction.
  • Since more comprehensive national A.I. regulations aren’t coming anytime soon, we need a narrow federal law allowing employees to disclose information to Congress if they reasonably believe that an A.I. model poses a significant safety risk
  • People reporting violations of the Atomic Energy Act have more robust whistle-blower protections than those in most fields, while those working in biological toxins for several government departments are protected by proactive, pro-reporting guidance. A.I. workers need similar rules.
  • Many companies maintain a culture of secrecy beyond what is healthy. I once worked at the consulting firm McKinsey on a team that advised Immigration and Customs Enforcement on implementing Donald Trump’s inhumane immigration policies. I was fearful of going public.
  • But McKinsey did not hold the majority of employees’ compensation hostage in exchange for signing lifetime nondisparagement agreements, as OpenAI did.
  • Congress should establish a special inspector general to serve as a point of contact for these whistle-blowers. The law should mandate companies to notify staff about the channels available to them, which they can use without facing retaliation.
  • Earlier this month, OpenAI released a highly advanced new model. For the first time, experts concluded the model could aid in the construction of a bioweapon more effectively than internet research alone could. A third party hired by the company found that the new system demonstrated evidence of “power seeking” and “the basic capabilities needed to do simple in-context scheming.”
  • OpenAI decided to publish these results, but the company still chooses what information to share. It is possible the published information paints an incomplete picture of the model’s risks.
  • The A.I. safety researcher Todor Markov — who recently left OpenAI after nearly six years with the firm — suggested one hypothetical scenario. An A.I. company promises to test its models for dangerous capabilities, then cherry-picks results to make the model look safe. A concerned employee wants to notify someone, but doesn’t know who — and can’t point to a specific law being broken. The new model is released, and a terrorist uses it to construct a novel bioweapon. Multiple former OpenAI employees told me this scenario is plausible.
  • The United States’ current arrangement of managing A.I. risks through voluntary commitments places enormous trust in the companies developing this potentially dangerous technology. Unfortunately, the industry in general — and OpenAI in particular — has shown itself to be unworthy of that trust, time and again.
  • The fate of the first attempt to protect A.I. whistle-blowers rests with Governor Gavin Newsom of California. Mr. Newsom has hinted that he will veto a first-of-its-kind A.I. safety bill, called S.B. 1047, which mandates that the largest A.I. companies implement safeguards to prevent catastrophes and features whistle-blower protections, a rare point of agreement between the bill’s supporters and its critics.
  • if those legislators are serious in their support for these protections, they should introduce a federal A.I. whistle-blower protection bill. They are well positioned to do so: The letter’s organizer, Representative Zoe Lofgren, is the ranking Democrat on the House Committee on Science, Space and Technology.
  • Last month, a group of leading A.I. experts warned that as the technology rapidly progresses, “we face growing risks that A.I. could be misused to attack critical infrastructure, develop dangerous weapons or cause other forms of catastrophic harm.” These risks aren’t necessarily criminal, but they are real — and they could prove deadly. If that happens, employees at OpenAI and other companies will be the first to know. But will they tell us?
Javier E

At Kimberly-Clark, 'Dead Wood' Workers Have Nowhere to Hide - WSJ

  • One of the company’s goals now is “managing out dead wood,” aided by performance-management software that helps track and evaluate salaried workers’ progress and quickly expose laggards. Turnover is now about twice as high as it was a decade ago, with approximately 10% of U.S. employees leaving annually, voluntarily or not, the company said.
  • Armed with personalized goals for employees and large quantities of data, Kimberly-Clark said it expects employees to keep improving—or else. “People can’t duck and hide in the same way they could in the past,” said Mr. Boston, who oversees talent management globally for the firm.
  • Coca-Cola Co. in June approved pushing its new performance-management process from the pilot stage to a global rollout. The new system encourages managers to conduct a monthly “reflection” on every direct report, answering five questions that include “Given his/her performance, would you assign this associate to increased scale, scope, and responsibilities?” and “Is this associate at risk for low performance?”
  • The changes mirror what is happening inside many large companies, where “performance management” reflects the conviction that a sharpened focus on creating a high-performing workforce is a vital tool to generate revenue and profit.
  • Performance management shifts companies away from backward-looking, once-a-year reviews framed largely as compliance requirements—a paper trail for potential job cuts and salary decisions—to a process that is real-time, continuous and focused on helping people meet ambitious goals, or move out of the company faster.
  • The last recession led many employers to rethink the nearly automatic merit raises they had been doling out, forcing them to do a better job identifying high and low performers when giving raises and bonuses. Millennial workers, meanwhile, demand more feedback, more coaching and a stronger sense of their career path.
  • systems let managers track workers’ progress via dashboards that display their goals, accomplishments, attendance, peer feedback and other data.
  • Executives’ use of phrases like “performance culture” in conference calls with analysts and investors has doubled in the past five years, according to a review of transcripts in the Factiva news database. Firms that set goals and hold workers accountable “clearly outperform,” said Nicholas Bloom, an economist at Stanford University and co-author of a recent paper that used Census data to examine more than 32,000 U.S. manufacturing plants. He said they have faster growth, higher profitability and are less likely to go bankrupt.
  • Some academics say constant monitoring can feel intrusive and threatening to workers, especially those who value stability. But human-resources experts largely agree that the traditional review process is a waste of time and needs an overhaul.
  • Remaining employees are expected to work “smarter” and meet regularly raised targets. “We have to routinely shuffle the resources and say, what’s the most important thing we need to do today, this week, this month, to drive this objective?”
  • Using the Workday tool, Kimberly-Clark’s salaried employees set goals and report their progress, record accomplishments or mistakes, and solicit and send feedback
  • The system collects and archives feedback, which can be seen by employees’ managers. It also holds data on staffers’ strengths and development needs, their performance ratings and the risk they might leave the company.
  • “It’s certainly more challenging” for employees, said Mr. Herbert, the retired sales director. “If you really don’t have the mettle, you’re asked to get on with your life’s work [elsewhere].”
  • In 2015, Kimberly-Clark retained 95% of its top performers. Among the employees whose work was rated “unacceptable” or “inconsistent,” 44% left the company voluntarily or were let go. Ms. Gottung said she is “pretty pleased” that low-performer turnover has been rising.
  • Mr. Falk, the CEO, reviews 100 senior managers’ performance plans every year to make sure their goals are ambitious and reflect company priorities. Managers are instructed to begin every meeting with a story about how someone demonstrated one of the six behaviors the company promotes, such as “build trust” or “think customer.”
  • Regular “culture of accountability” sessions train employees in giving and receiving difficult feedback. When a colleague suggests improvements, “the proper response was ‘thank you for the feedback,’ not defensiveness,” Mr. Luettgen said. Employees also practice reinforcing positive behaviors, such as praising a colleague who had given up a weekend to solve a customer complaint.
  • More than 10,000 of Kimberly-Clark’s workers used the feedback feature in Workday in 2014, and about 25% of the comments were considered “constructive,” while the rest were positive or neutral, said Sandy Allred, a senior director on the talent management team. Staffers can send feedback to peers or workers above or below them.
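The system described in these excerpts is, at bottom, a database of goals, ratings and archived feedback. As a rough illustration, here is a minimal sketch in Python of the kind of records such a tool might keep; the types, field names and rating labels are assumptions for illustration, not Workday’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackEntry:
    author: str          # a peer, manager or direct report
    recipient: str
    text: str
    constructive: bool   # vs. positive/neutral, as in the roughly 25% figure above
    sent_on: date = field(default_factory=date.today)

@dataclass
class EmployeeRecord:
    name: str
    goals: list[str] = field(default_factory=list)
    accomplishments: list[str] = field(default_factory=list)
    performance_rating: str = "unrated"   # e.g. "top", "inconsistent", "unacceptable"
    attrition_risk: float = 0.0           # estimated risk the employee leaves
    feedback: list[FeedbackEntry] = field(default_factory=list)

    def feedback_visible_to_manager(self) -> list[FeedbackEntry]:
        # The article notes that all archived feedback can be seen by
        # the employee's manager; nothing is filtered out here.
        return list(self.feedback)
```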
Javier E

The Plight of the Overworked Nonprofit Employee - The Atlantic - 0 views

  • Many nonprofit organizations stare down a shared set of challenges: In a 2013 report, the Urban Institute surveyed over 4,000 nonprofits of a wide range of types and sizes across the continental U.S. It found that all kinds of nonprofits struggled with delays in payment for contracts, difficulty securing funding for the full cost of their services, and other financial issues.
  • Recent years have been especially hard for many nonprofits. Most have annual budgets of less than $1 million, and those budgets took a big hit from the recession, when federal, municipal, and philanthropic funding dried up. On top of that, because so many nonprofits depend on government money, policy changes can cause funding priorities to change, which in turn can put nonprofits in a bind.
  • The pressure from funders to tighten budgets and cut costs can produce what researchers call the “nonprofit starvation cycle.” The cycle starts with funders’ unrealistic expectations about the costs of running a nonprofit. In response, nonprofits try to spend less on overhead (like salaries) and under-report expenses to try to meet those unrealistic expectations. That response then reinforces the unrealistic expectations that began the cycle. In this light, it’s no surprise that so many nonprofits have come to rely on unpaid work.
  • Strangely, though nonprofits are increasingly expected to perform like businesses, they do not get the same leeway in funding that government-contracted businesses do. They don’t have nearly the bargaining power of big corporations, or the ability to raise costs for their products and services, because of tight controls on grant funding. “D.C. is full of millionaires who contract with government in the defense field, and they make a killing, and yet if you’re a nonprofit, chances are you aren’t getting the full amount of funding to cover the cost of the services required,” Iliff said. “Can you imagine Lockheed Martin or Boeing putting up with a government contract that didn’t allow for overhead?”
  • When faced with dwindling funding, one response would be to cut a program or reduce the number of people an organization serves. But nonprofit leaders have shown themselves very reluctant to do that. Instead, many meet financial challenges by squeezing more work out of their staffs without a proportional increase in their pay:
  • “There is this feeling that the mission is so important that nothing should get in the way of it,”
  • These nonprofit employees are saying that their operations depend on large numbers of their lowest-paid staff working unpaid overtime hours. One way to get to that point would be to face a series of choices between increased productivity on the one hand and reduced hours, increased pay, or more hiring on the other, and to choose more productivity every time. That some nonprofits have done this speaks to a culture that can put the needs of staff behind mission-driven ambitions.
  • In the 1970s, 62 percent of full-time, salaried workers qualified for mandatory overtime pay when they worked more than 40 hours in a week. Today, because the overtime rules have not had a major update since then (until this one), only 7 percent of workers are covered, whether they work in the nonprofit sector or elsewhere. In other words, U.S. organizations—nonprofit or otherwise—have been given the gift of a large pool of laborers who, as long as they clear a relatively low earnings threshold and do tasks that meet certain criteria, do not have to be paid overtime. (A simplified sketch of this eligibility logic appears at the end of this list.)
  • Unsurprisingly, many nonprofits have taken advantage of that pool of free work. (For-profit companies have too, but they also have the benefit of being more in control of their revenue streams.)
  • Nonprofits like PIRG, for example, have a tradition of forcing employees to work long, unpaid hours—especially their youngest staff. “There’s a culture that says, ‘Young people are paying their dues. It’s okay for them to be paid for fewer hours than they’re actually working because it’s in the effort of helping them grow up and contribute to something greater than they are,’” Boris says.
  • “Too often, I have seen the passion for social change turned into a weapon against the very people who do much—if not most—of the hard work, and put in most of the hours,” Hastings recently wrote on her blog. “Because they are highly motivated by passion, the reasoning goes, they don’t need to be motivated by decent salaries or sustainable work hours or overtime pay.”
  • A 2011 survey of more than 2,000 nonprofit employees by Opportunity Knocks, a human-resources organization that specializes in nonprofits, in partnership with Jessica Word, an associate professor of public administration at the University of Nevada, Las Vegas, found that half of employees in the nonprofit sector may be burned out or in danger of burnout.
  • “These are highly emotional and difficult jobs,” she said, adding, “These organizations often have very high rates of employee turnover, which results from a combination of burnout and low compensation.” Despite the dearth of research, Word’s findings don’t appear to be unusual: A more recent study of nonprofits in the U.S. and Canada found that turnover, one possible indicator of burnout, is higher in nonprofits than in the overall labor market.
  • For all their hours and emotional labor, nonprofit employees generally don’t make much money. A 2014 study by Third Sector New England, a resource center for nonprofits, found that 43 percent of nonprofit employees in New England were making less than $28,000 per year—far less than a living wage for families with children in most cities in the United States, and well below the national median income of between $40,000 and $50,000 per year.
  • Why would nonprofit workers be willing to stay in jobs where they are underpaid, or, in some cases, accept working conditions that violate the spirit of the labor laws that protect them? One plausible reason is that they are just as committed to the cause as their superiors.
  • But it also might be that some nonprofits exploit gray areas in the law to cut costs. For instance, only workers who are labeled as managers are supposed to be exempt from overtime, but many employers stretch the definition of “manager” far beyond its original intent.
  • Even apart from these designations, the emotionally demanding work at many nonprofits is sometimes difficult to shoehorn into a tidy 40-hours-a-week schedule. Consider Elle Roberts, who was considered exempt from overtime restrictions and was told not to work more than 40 hours a week when, as a young college grad, she worked at a domestic-violence shelter in northwest Indiana. Doing everything from home visits to intake at the shelter, Roberts still ignored her employer’s dictates and regularly worked well more than 40 hours a week providing relief for women in crisis. Yet she was not paid for that extra time.
  • “The unspoken expectation is that you do whatever it takes to get whatever it is done for the people that you’re serving,” she says. “And anything less than that, you’re not quite doing enough.”
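The overtime rule sketched in these excerpts turns on two tests: an earnings threshold and a duties test, with time-and-a-half owed past 40 hours to anyone who fails either. Below is a minimal sketch of that logic; the threshold figure, function name and sample numbers are illustrative assumptions, not the legal values in force at any particular time.

```python
# Illustrative weekly salary threshold (a hypothetical figure for this sketch).
ILLUSTRATIVE_WEEKLY_THRESHOLD = 684.00  # dollars

def weekly_pay(base_weekly_salary: float, hours_worked: float,
               duties_are_exempt: bool) -> float:
    """Pay owed for one week under an FLSA-style rule: a salaried worker
    is exempt from overtime only if paid above the threshold AND doing
    exempt duties; everyone else gets time-and-a-half past 40 hours."""
    exempt = (base_weekly_salary > ILLUSTRATIVE_WEEKLY_THRESHOLD
              and duties_are_exempt)
    if exempt or hours_worked <= 40:
        return base_weekly_salary
    implied_hourly = base_weekly_salary / 40
    overtime_hours = hours_worked - 40
    return base_weekly_salary + 1.5 * implied_hourly * overtime_hours

# A $28,000-a-year worker (about $538/week) putting in 50 hours would be
# owed roughly $740 for that week if classified as non-exempt:
print(round(weekly_pay(28_000 / 52, 50, duties_are_exempt=False), 2))
```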
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress.
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the researcher a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho, in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing.
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application programming interfaces, or APIs. (A sketch of what such an API call looked like appears at the end of this list.)
  • Mr. Altman and Mr. Amodei clashed again over the release of the API. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said.
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said.
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
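Selling access “through packaged lines of code,” as described above, meant developers never touched the model itself: they sent a prompt to OpenAI’s servers and received generated text back. Here is a minimal sketch of what such a call looked like with the openai Python package of the GPT-3 era; the engine name and key are placeholders, and the library’s interface has changed since then.

```python
import openai  # pip install openai (pre-1.0 interface shown here)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Ask the hosted model to complete a prompt; the model weights stay on
# OpenAI's servers, and the developer only sees the returned text.
response = openai.Completion.create(
    engine="davinci",  # a GPT-3-era engine name
    prompt="Write one sentence explaining what an API is.",
    max_tokens=40,
)
print(response["choices"][0]["text"].strip())
```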
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times - 0 views

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • The departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models, was also seen as a setback. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist.
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”
Javier E

How Trump Consultants Exploited the Facebook Data of Millions - The New York Times - 0 views

  • Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”
  • “They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”
  • But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
  • The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.”
  • “Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”Still, he added, “it’s a serious abuse of our rules.”
  • The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
  • Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements.
  • Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.
  • But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.
  • Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
  • When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.
  • He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested. (A rough amplification estimate from these figures appears at the end of this list.)
  • Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.
  • “We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.”
  • The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European.
  • In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, a lawyer for the company, Mr. Levy, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.”
  • In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican.
  • While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees.
  • Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.
  • Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”
  • But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.
  • While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.
  • Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” he puts it. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.
  • Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations.
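As a rough consistency check on the figures above: about 270,000 consenting quiz-takers yielding over 50 million profiles implies an average of roughly 185 profiles harvested per participant, nearly all of them belonging to friends who never consented.

\[
\frac{50{,}000{,}000 \ \text{profiles}}{270{,}000 \ \text{consenting users}} \approx 185 \ \text{profiles per user}
\]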
Javier E

Inside a Battle Over Race, Class and Power at Smith College - The New York Times - 0 views

  • NORTHAMPTON, Mass. — In midsummer of 2018, Oumou Kanoute, a Black student at Smith College, recounted a distressing American tale: She was eating lunch in a dorm lounge when a janitor and a campus police officer walked over and asked her what she was doing there.
  • The officer, who could have been carrying a “lethal weapon,” left her near “meltdown,” Ms. Kanoute wrote on Facebook, saying that this encounter continued a yearlong pattern of harassment at Smith.
  • “All I did was be Black,” Ms. Kanoute wrote. “It’s outrageous that some people question my being at Smith College, and my existence overall as a woman of color.”
  • The college’s president, Kathleen McCartney, offered profuse apologies and put the janitor on paid leave. “This painful incident reminds us of the ongoing legacy of racism and bias,” the president wrote, “in which people of color are targeted while simply going about the business of their ordinary lives.”
  • a law firm hired by Smith College to investigate the episode found no persuasive evidence of bias. Ms. Kanoute was determined to have eaten in a deserted dorm that had been closed for the summer; the janitor had been encouraged to notify security if he saw unauthorized people there. The officer, like all campus police, was unarmed.
  • Smith College officials emphasized “reconciliation and healing” after the incident. In the months to come they announced a raft of anti-bias training for all staff, a revamped and more sensitive campus police force and the creation of dormitories — as demanded by Ms. Kanoute and her A.C.L.U. lawyer — set aside for Black students and other students of color.
  • But they did not offer any public apology or amends to the workers whose lives were gravely disrupted by the student’s accusation.
  • The atmosphere at Smith is gaining attention nationally, in part because a recently resigned employee of the school, Jodi Shaw, has attracted a fervent YouTube following by decrying what she sees as the college’s insistence that its white employees, through anti-bias training, accept the theory of structural racism.
  • The story highlights the tensions between a student’s deeply felt sense of personal truth and facts that are at odds with it.
  • Those tensions come at a time when few in the Smith community feel comfortable publicly questioning liberal orthodoxy on race and identity, and some professors worry the administration is too deferential to its increasingly emboldened students.
  • “My perception is that if you’re on the wrong side of issues of identity politics, you’re not just mistaken, you’re evil,” said James Miller, an economics professor at Smith College and a conservative.
  • Faculty members, however, pointed to a pattern that they say reflects the college’s growing timidity in the face of allegations from students, especially around the issue of race and ethnicity.
  • In 2016, students denounced faculty at Smith’s social work program as racist after some professors questioned whether admissions standards for the program had been lowered and this was affecting the quality of the field work. Dennis Miehls, one of the professors they decried, left the school not long after.
  • This is a tale of how race, class and power collided at the elite 145-year-old liberal arts college, where tuition, room and board top $78,000 a year and where the employees who keep the school running often come from working-class enclaves beyond the school’s elegant wrought iron gates.
  • “Stop demanding that I admit to white privilege, and work on my so-called implicit bias as a condition of my continued employment.”
  • Student workers were not supposed to use the Tyler cafeteria, which was reserved for a summer camp program for young children. Jackie Blair, a veteran cafeteria employee, mentioned that to Ms. Kanoute when she saw her getting lunch there and then decided to drop it. Staff members dance carefully around rule enforcement for fear students will lodge complaints.
  • “We used to joke, don’t let a rich student report you, because if you do, you’re gone,” said Mark Patenaude, a janitor.
  • A well-known older campus security officer drove over to the dorm. He recognized Ms. Kanoute as a student and they had a brief and polite conversation, which she recorded. He apologized for bothering her and she spoke to him of her discomfort: “Stuff like this happens way too often, where people just feel, like, threatened.”
  • That night Ms. Kanoute wrote a Facebook post: “It’s outrageous that some people question my being at Smith, and my existence overall as a woman of color.”
  • Her two-paragraph post hit Smith College like an electric charge. President McCartney weighed in a day later. “I begin by offering the student involved my deepest apology that this incident occurred,” she wrote. “And to assure her that she belongs in all Smith places.”
  • Ms. McCartney did not speak to the accused employees and put the janitor on paid leave that day.
  • Ms. McCartney appeared intent on making no such missteps in 2018. In an interview, she said that Ms. Kanoute deserved an apology and swift action, even before the investigation was undertaken. “It was appropriate to apologize,” Ms. McCartney said. “She is living in a context of ‘living while Black’ incidents.” The school’s workers felt scapegoated.
  • “It is safe to say race is discussed far more often than class at Smith,” said Prof. Marc Lendler, who teaches American government at the college. “It’s a feature of elite academic institutions that faculty and students don’t recognize what it means to be elite.”
  • The repercussions spread. Three weeks after the incident at Tyler House, Ms. Blair, the cafeteria worker, received an email from a reporter at The Boston Globe asking her to comment on why she called security on Ms. Kanoute for “eating while Black.” That puzzled her; what did she have to do with this?
  • The food services director called the next morning. “Jackie,” he said, “you’re on Facebook.” She found that Ms. Kanoute had posted her photograph, name and email, along with that of Mr. Patenaude, a 21-year Smith employee and janitor.
  • “This is the racist person,” Ms. Kanoute wrote of Ms. Blair, adding that Mr. Patenaude too was guilty. (He in fact worked an early shift that day and had already gone home at the time of the incident.) Ms. Kanoute also lashed the Smith administration. “They’re essentially enabling racist, cowardly acts.”
  • Ms. Blair was born and raised and lives in Northampton with her husband, a mechanic, and makes about $40,000 a year. Within days of being accused by Ms. Kanoute, she said, she found notes in her mailbox and taped to her car window. “RACIST” read one. People called her at home. “You should be ashamed of yourself,” a caller said. “You don’t deserve to live,” said another.
  • Smith College put out a short statement noting that Ms. Blair had not placed the phone call to security but did not absolve her of broader responsibility. Ms. McCartney called her and briefly apologized. That apology was not made public.
  • By September, a chill had settled on the campus. Students walked out of autumn convocation in solidarity with Ms. Kanoute. The Black Student Association wrote to the president saying they “do not feel heard or understood. We feel betrayed and tokenized.”
  • Smith officials pressured Ms. Blair to go into mediation with Ms. Kanoute. “A core tenet of restorative justice,” Ms. McCartney wrote, “is to provide people with the opportunity for willing apology, forgiveness and reconciliation.”
  • Ms. Blair declined. “Why would I do this? This student called me a racist and I did nothing,” she said.
  • On Oct. 28, 2018, Ms. McCartney released a 35-page report from a law firm with a specialty in discrimination investigations. The report cleared Ms. Blair altogether and found no sufficient evidence of discrimination by anyone else involved, including the janitor who called campus police.
  • Still, Ms. McCartney said the report validated Ms. Kanoute’s lived experience, notably the fear she felt at the sight of the police officer. “I suspect many of you will conclude, as did I,” she wrote, “it is impossible to rule out the potential role of implicit racial bias.”
  • Ms. McCartney offered no public apology to the employees after the report was released. “We were gobsmacked — four people’s lives wrecked, two were employees of more than 35 years and no apology,” said Tracey Putnam Culver, a Smith graduate who recently retired from the college’s facilities management department. “How do you rationalize that?”
  • Rahsaan Hall, racial justice director for the A.C.L.U. of Massachusetts and Ms. Kanoute’s lawyer, cautioned against drawing too much from the investigative report, as subconscious bias is difficult to prove. Nor was he particularly sympathetic to the accused workers.
  • “It’s troubling that people are more offended by being called racist than by the actual racism in our society,” he said. “Allegations of being racist, even getting direct mailers in their mailbox, is not on par with the consequences of actual racism.”
  • Ms. Blair was reassigned to a different dormitory, as Ms. Kanoute lived in the one where she had labored for many years. Her first week in her new job, she said, a female student whispered to another: There goes the racist.
  • Anti-bias training began in earnest in the fall. Ms. Blair and other cafeteria and grounds workers found themselves being asked by consultants hired by Smith about their childhood and family assumptions about race, which many viewed as psychologically intrusive. Ms. Blair recalled growing silent and wanting to crawl inside herself.
  • The faculty are not required to undergo such training. Professor Lendler said in an interview that such training for working-class employees risks becoming a kind of psychological bullying. “My response would be, ‘Unless it relates to conditions of employment, it’s none of your business what I was like growing up or what I should be thinking of,’” he said.
  • In addition to the training sessions, the college has set up “White Accountability” groups where faculty and staff are encouraged to meet on Zoom and explore their biases, although faculty attendance has fallen off considerably.
  • The janitor who called campus security quietly returned to work after three months of paid leave and declined to be interviewed. The other janitor, Mr. Patenaude, who was not working at the time of the incident, left his job at Smith not long after Ms. Kanoute posted his photograph on social media, accusing him of “racist cowardly acts.”
  • “I was accused of being the racist,” Mr. Patenaude said. “To be honest, that just knocked me out. I’m a 58-year-old male, we’re supposed to be tough. But I suffered anxiety because of things in my past and this brought it to a whole ’nother level.”
  • He recalled going through one training session after another in race and intersectionality at Smith. He said it left workers cynical. “I don’t know if I believe in white privilege,” he said. “I believe in money privilege.”
  • This past autumn the college furloughed her and other workers, citing the coronavirus and the empty dorms. Ms. Blair applied for an hourly job with a local restaurant. The manager set up a Zoom interview, she said, and asked her: “‘Aren’t you the one involved in that incident?’”
  • “I was pissed,” she said. “I told her I didn’t do anything wrong, nothing. And she said, ‘Well, we’re all set.’”
Javier E

How Facebook Failed the World - The Atlantic - 0 views

  • In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.
  • these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources.
  • Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.
  • According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”
  • As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.
  • last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users.
  • Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech.
  • The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents.
  • “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats.
  • In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.”
  • To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”
  • And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.
  • Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.
  • A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult.
  • “I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”
  • Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”
  • Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence
  • It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible.
  • Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of whack-a-mole, one with no winners and a lot of sore arms.
  • Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.
  • A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”
  • Zhang’s memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.”
  • Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.
  • “I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”
  • What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.
  • That Facebook falls short of solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.
Javier E

How Elon Musk spoiled the dream of 'Full Self-Driving' - The Washington Post - 0 views

  • They said Musk’s erratic leadership style also played a role, forcing them to work at a breakneck pace to develop the technology and to push it out to the public before it was ready. Some said they are worried that, even today, the software is not safe to be used on public roads. Most spoke on the condition of anonymity for fear of retribution.
  • “The system was only progressing very slowly internally” but “the public wanted a product in their hands,” said John Bernal, a former Tesla test operator who worked in its Autopilot department. He was fired in February 2022 when the company alleged improper use of the technology after he had posted videos of Full Self-Driving in action.
  • “Elon keeps tweeting, ‘Oh we’re almost there, we’re almost there,’” Bernal said. But “internally, we’re nowhere close, so now we have to work harder and harder and harder.” The team has also bled members in recent months, including senior executives.
  • “No one believed me that working for Elon was the way it was until they saw how he operated Twitter,” Bernal said, calling Twitter “just the tip of the iceberg on how he operates Tesla.”
  • In April 2019, at a showcase dubbed “Autonomy Investor Day,” Musk made perhaps his boldest prediction as Tesla’s chief executive. “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware,” Musk told a roomful of investors. The software updates automatically over the air, and Full Self-Driving would be so reliable, he said, the driver “could go to sleep.”
  • Investors were sold. The following year, Tesla’s stock price soared, making it the most valuable automaker and helping Musk become the world’s richest person.
  • To deliver on his promise, Musk assembled a star team of engineers willing to work long hours and problem solve deep into the night. Musk would test the latest software on his own car, then he and other executives would compile “fix-it” requests for their engineers.
  • Those patchwork fixes gave the illusion of relentless progress but masked the lack of a coherent development strategy, former employees said. While competitors such as Alphabet-owned Waymo adopted strict testing protocols that limited where self-driving software could operate, Tesla eventually pushed Full Self-Driving out to 360,000 owners — who paid up to $15,000 to be eligible for the features — and let them activate it at their own discretion.
  • Tesla’s philosophy is simple: The more data (in this case driving) the artificial intelligence guiding the car is exposed to, the faster it learns. But that crude model also means there is a lighter safety net. Tesla has chosen to effectively allow the software to learn on its own, developing sensibilities akin to a brain via technology dubbed “neural nets” with fewer rules, the former employees said. While this has the potential to speed the process, it boils down to essentially a trial and error method of training.
  • Radar originally played a major role in the design of the Tesla vehicles and software, supplementing the cameras by offering a reality check of what was around, particularly if vision might be obscured. Tesla also used ultrasonic sensors, shorter-range devices that detect obstructions within inches of the car. (The company announced last year it was eliminating those as well.)
  • Musk, as the chief tester, also asked for frequent bug fixes to the software, requiring engineers to go in and adjust code. “Nobody comes up with a good idea while being chased by a tiger,” a former senior executive recalled an engineer on the project telling him.
  • Toward the end of 2020, Autopilot employees turned on their computers to find in-house workplace monitoring software installed, former employees said. It monitored keystrokes and mouse clicks, and kept track of their image labeling. If the mouse did not move for a period of time, a timer started — and employees could be reprimanded, up to and including being fired, for periods of inactivity, the former employees said.
  • Some of the people who spoke with The Post said that approach has introduced risks. “I just knew that putting that software out in the streets would not be safe,” said a former Tesla Autopilot engineer who spoke on the condition of anonymity for fear of retaliation. “You can’t predict what the car’s going to do.”
  • Some of the people who spoke with The Post attributed Tesla’s sudden uptick in “phantom braking” reports — where the cars aggressively slow down from high speeds — to the lack of radar. The Post analyzed data from the National Highway Traffic Safety Administration to show that incidents surged last year, prompting a federal regulatory investigation.
  • The data showed reports of “phantom braking” rose to 107 complaints over three months, compared with only 34 in the preceding 22 months. After The Post highlighted the problem in a news report, NHTSA received about 250 complaints of the issue in a two-week period. The agency opened an investigation after, it said, it received 354 complaints of the problem spanning a period of nine months. (A rough per-month comparison of these figures appears after this list.)
  • “It’s not the sole reason they’re having [trouble] but it’s a big part of it,” said Missy Cummings, a former senior safety adviser for NHTSA, who has criticized the company’s approach and recused herself on matters related to Tesla. “The radar helped detect objects in the forward field. [For] computer vision, which is rife with errors, it serves as a sensor fusion way to check if there is a problem.”
  • Even with radar, Teslas were less sophisticated than the lidar- and radar-equipped cars of competitors. “One of the key advantages of lidar is that it will never fail to see a train or truck, even if it doesn’t know what it is,” said Brad Templeton, a longtime self-driving car developer and consultant who worked on Google’s self-driving car. “It knows there is an object in front and the vehicle can stop without knowing more than that.”
  • Musk’s resistance to suggestions led to a culture of deference, former employees said. Tesla fired employees who pushed back on his approach. The company was also pushing out so many updates to its software that in late 2021, NHTSA publicly admonished Tesla for issuing fixes without a formal recall notice.
  • Tesla engineers have been burning out, quitting and looking for opportunities elsewhere. Andrej Karpathy, Tesla’s director of artificial intelligence, took a months-long sabbatical last year before leaving Tesla and taking a position this year at OpenAI, the company behind the A.I. chatbot ChatGPT.
  • One of the former employees said that he left for Waymo. “They weren’t really wondering if their car’s going to run the stop sign,” the engineer said. “They’re just focusing on making the whole thing achievable in the long term, as opposed to hurrying it up.”
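The complaint counts in the phantom-braking highlight are easier to compare once normalized per month. The sketch below does that arithmetic; the window lengths are rough readings of the periods named in the text (two weeks treated as half a month), not NHTSA's underlying data.

```python
# Per-month rates for the "phantom braking" complaint counts quoted above.
# Window lengths approximate the periods named in the highlights, not
# NHTSA's underlying data (two weeks is treated as half a month).
periods = {
    "preceding 22 months": (34, 22.0),
    "three-month surge": (107, 3.0),
    "two weeks after The Post's report": (250, 0.5),
    "nine months covered by the NHTSA probe": (354, 9.0),
}

for label, (complaints, months) in periods.items():
    print(f"{label}: {complaints / months:.1f} complaints/month")

# preceding 22 months: 1.5 complaints/month
# three-month surge: 35.7 complaints/month
# two weeks after The Post's report: 500.0 complaints/month
# nine months covered by the NHTSA probe: 39.3 complaints/month
```

However the windows are sliced, the later rates run more than an order of magnitude above the 22-month baseline, which is what the reported surge means in concrete terms.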
anonymous

Cash, Breakfasts and Firings: An All-Out Push to Vaccinate Wary Medical Workers - The N... - 0 views

  • Anxious about taking a new vaccine and scarred by a history of being mistreated, many frontline workers at hospitals and nursing homes are balking at getting inoculated against Covid-19.
  • Those opposing forces have spawned an unusual situation: In addition to educating their workers about the benefits of the Covid-19 vaccines, a growing number of employers are dangling incentives like cash, extra time off and even Waffle House gift cards for those who get inoculated, while in at least a few cases saying they will fire those who refuse.
  • “For us, this was not a tough decision,” said Lynne Katzmann, Juniper’s chief executive. “Our goal is to do everything possible to protect our residents and our team members and their families.”
  • ...11 more annotations...
  • “This is a population of people who have been historically ignored, abused and mistreated,” said Dr. Mike Wasserman, a geriatrician and former president of the California Association of Long Term Care Medicine. “It is laziness on the part of anyone to force these folks to take a vaccine. I believe that we need to be putting all of our energy into respecting, honoring and valuing the work they do and educating them on the benefits to them and the folks they take care of in getting vaccinated.”
  • At Jackson Health System in Miami, a survey of about 5,900 employees found that only half wanted to get a vaccine immediately, a hospital spokeswoman said.
  • Henry Ford Health System, which runs six hospitals in Michigan, said that as of Wednesday morning, about 22 percent of its 33,000 employees had declined to be vaccinated.
  • At Houston Methodist, a hospital system in Texas with 26,000 employees, workers who take the vaccine will be eligible for a $500 bonus. “Vaccination is not mandatory for our employees yet (but will be eventually),” Dr. Marc Boom, the hospital’s chief executive, wrote in an email to employees last month.
  • Gov. Mike DeWine of Ohio said last month that roughly 60 percent of nursing home staff members offered the vaccine in his state had declined it.
  • Underlying the hesitancy is a lack of trust in authorities — the federal government, politicians, even their employers — that have failed for the past year to get the virus under control.
  • “We are left behind in the dust — no one sticks up for us,”
  • Another concern about forcing workers to get vaccinated is that it could prompt hesitant employees to resign. That’s a particular worry in long-term care, where the pandemic has exacerbated a shortage of certified nursing assistants.
  • Both vaccines have been found to be safe and highly effective. So why are so many hospital and long-term care workers reluctant to get inoculated?
  • At Norton Healthcare, a health system in Louisville, Ky., workers who refuse the vaccine and then catch Covid-19 will generally no longer be able to take advantage of the paid medical leave that Norton has been offering to infected employees since early in the pandemic.
  • At Juniper — which has 20 senior living communities in New Jersey, Pennsylvania and Colorado — officials have tried to educate workers about the safety and benefits of Covid-19 vaccines, including hosting a webinar with a registered nurse who was enrolled in a clinical trial of the Moderna vaccine. Officials told staff last month that vaccines would be mandatory.
woodlu

Purpose and the employee | The Economist - 0 views

  • The very idea of a purposeful employee conjures up a specific type of person. They crave a meaningful job that changes society for the better. When asked about their personal passion projects, they don’t say “huh?” or “playing Wordle”. They are concerned about their legacy and almost certainly have a weird diet.
  • Bain identifies six different archetypes, far too few to reflect the complexity of individuals but a lot better than treating employees as a single lump.
  • “Pioneers” are the people on a mission to change the world; “artisans” are interested in mastering a specific skill; “operators” derive a sense of meaning from life outside work; “strivers” are more focused on pay and status; “givers” want to do work that directly improves the lives of others; and “explorers” seek out new experiences.
  • ...7 more annotations...
  • Having a purpose does not necessarily mean a desire to found a startup, head up the career ladder or log into virtual Davos. Some people are fired up by the prospect of learning new skills or of deepening their expertise
  • Executives were far likelier than other respondents to say that their purpose was fulfilled by their job.
  • Pioneers in particular are more likely to cluster in management roles. The Bain survey finds that 25% of American executives match this archetype, but only 9% of the overall US sample does so.
  • Others derive purpose from specific kinds of responsibility.
  • People who had been working as station agents before their elevation were generally satisfied by their new roles. But supervisors who had previously worked as train drivers were noticeably less content: they felt their roles had less meaning when they no longer had direct responsibility for the well-being of passengers.
  • Firms need to think more creatively about career progression than promoting people into management jobs. IBM, for example, has a fellowship programme designed to give a handful of its most gifted technical employees their own form of recognition each year.
  • There is some logic here. Employees with a calling could well be more dedicated. But that doesn’t necessarily make them better at the job. And teams are likelier to perform well if they blend types of employees: visionaries to inspire, specialists to deliver and all those people who want to do a job well but not think about it at weekends. Like mayonnaise, the secret is in the mixture.
Javier E

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times - 0 views

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories.
  • Google News often surfaced them, too
  • ...16 more annotations...
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated but could easily be mistaken for the work of human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • This was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, about Wikipedia after it published a negative entry about BNN Breaking, and about Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended on Twitter.
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
Javier E

Alex Stamos, Facebook Data Security Chief, To Leave Amid Outcry - The New York Times - 0 views

  • One central tension at Facebook has been that of the legal and policy teams versus the security team. The security team generally pushed for more disclosure about how nation states had misused the site, but the legal and policy teams have prioritized business imperatives, said the people briefed on the matter.
  • “The people whose job is to protect the user always are fighting an uphill battle against the people whose job is to make money for the company,” said Sandy Parakilas, who worked at Facebook enforcing privacy and other rules until 2012 and now advises a nonprofit organization called the Center for Humane Technology, which is looking at the effect of technology on people.
  • Mr. Stamos said in a statement on Monday, “These are really challenging issues, and I’ve had some disagreements with all of my colleagues, including other executives.” On Twitter, he said he was “still fully engaged with my work at Facebook” and acknowledged that his role had changed, without addressing his future plans.
  • ...13 more annotations...
  • Mr. Stamos joined Facebook from Yahoo in June 2015. He and other Facebook executives, such as Ms. Sandberg, disagreed early on over how proactive the social network should be in policing its own platform, said the people briefed on the matter.
  • Mr. Stamos first put together a group of engineers to scour Facebook for Russian activity in June 2016, the month the Democratic National Committee announced it had been attacked by Russian hackers, the current and former employees said.
  • By November 2016, the team had uncovered evidence that Russian operatives had aggressively pushed DNC leaks and propaganda on Facebook. That same month, Mr. Zuckerberg publicly dismissed the notion that fake news influenced the 2016 election, calling it a “pretty crazy idea.”
  • In the ensuing months, Facebook’s security team found more Russian disinformation and propaganda on its site, according to the current and former employees. By the spring of 2017, deciding how much Russian interference to disclose publicly became a major source of contention within the company.
  • A detailed memorandum Mr. Stamos wrote in early 2017 describing Russian interference was scrubbed for mentions of Russia and winnowed into a blog post last April that outlined, in hypothetical terms, how Facebook could be manipulated by a foreign adversary, they said. Russia was referenced only in a vague footnote. That footnote acknowledged that Facebook’s findings did not contradict a declassified January 2017 report in which the director of national intelligence concluded Russia had sought to undermine the United States election, and Hillary Clinton in particular.
  • Mr. Stamos pushed to disclose as much as possible, while others including Elliot Schrage, Facebook’s vice president of communications and policy, recommended not naming Russia without more ironclad evidence, said the current and former employees.
  • By last September, after Mr. Stamos’s investigation had revealed further Russian interference, Facebook was forced to reverse course. That month, the company disclosed that beginning in June 2015, Russians had paid Facebook $100,000 to run roughly 3,000 divisive ads aimed at the American electorate.
  • The public reaction caused some at Facebook to recoil at revealing more, said the current and former employees. Since the 2016 election, Facebook has paid unusual attention to the reputations of Mr. Zuckerberg and Ms. Sandberg, conducting polls to track how they are viewed by the public, said Tavis McGinn, who was recruited to the company last April and headed the executive reputation efforts through September 2017.
  • Mr. McGinn, who now heads Honest Data, which has done polling about Facebook’s reputation in different countries, said Facebook is “caught in a Catch-22.”
  • “Facebook cares so much about its image that the executives don’t want to come out and tell the whole truth when things go wrong,” he said. “But if they don’t, it damages their image.”
  • Mr. McGinn said he left Facebook after becoming disillusioned with the company’s conduct.
  • By December 2017, Mr. Stamos, who reports to Facebook’s general counsel, proposed that he report directly to higher-ups. Facebook executives rejected that proposal and instead reassigned Mr. Stamos’s team, splitting the security team between its product team, overseen by Guy Rosen, and its infrastructure team, overseen by Pedro Canahuati, according to current and former employees.
  • “I told them, ‘Your business is based on trust, and you’re losing trust,’” said Mr. McNamee, a founder of the Center for Humane Technology. “They were treating it as a P.R. problem, when it’s a business problem. I couldn’t believe these guys I once knew so well had gotten so far off track.”
Javier E

When FitBit can track your workplace performance: the new wearable frontier - The Washi... - 0 views

  • wearables can serve another purpose — determining whether you’re a productive employee. The data-obsessed may be quick to embrace such an assessment, but what if an employer has access to that information as well?
  • The researchers say their mobile-sensing system, which consists of fitness bracelets, sensors and a custom app, can measure employee performance with about 80 percent accuracy.
  • The system monitors physical and emotional signals that employees produce during the day and uses that data to create a performance profile over time that is designed to eliminate bias from evaluations.
  • ...19 more annotations...
  • it could signal the beginning of a new era of virtual assistants that will redefine our relationships with intelligent machines.
  • providing someone with valuable insights about their productivity, stress levels during meetings or lifestyle habits that impact their ability to perform their job
  • “We set out to discover whether there was a way to move the needle from an almost backward way of assessing people’s workplace performance to using more objective measures.”
  • Research shows that conscientious people, who are often more detail-oriented and disciplined, tend to be more productive.
  • If it was possible to predict someone’s mental health by analyzing their social media feeds and smartphone data, Campbell wondered, could similar data be leveraged to improve employee performance evaluations?
  • The workers were fitted with a wearable fitness tracker that monitored heart functions, sleep, stress, and measurements such as weight and calorie consumption, as well as a smartphone app that tracked their physical activity, location, phone usage and ambient light.
  • Location beacons placed in the home and office measured participants’ time at work and breaks from their desks, giving researchers a comprehensive window into their day from one hour to the next.
  • The information was processed by cloud-based machine-learning algorithms that classified performance using factors such as the amount of time spent at the workplace, quality of sleep, physical activity and phone usage. (A minimal sketch of this kind of classifier appears after this list.)
  • “We want to use that information to empower workers to tell them whether they’re being influenced by levels of stress or sleep or other factors that may not be immediately obvious to them.”
  • What the research does not explain, he said, is what habits make someone conscientious in the first place, leaving a gap in knowledge that researchers hoped to fill.
  • “Very often when people try to detect what drives performance, they rely on personality, which actually reveals little about someone’s ability to do their job well,” he said. “Evaluations can be biased because they are infused with stereotyping of people or political influences inside an office. But when you can extract a pattern over weeks and months, we can be more certain that assessment is objective and neutral.”
  • The results showed, perhaps not surprisingly, that high performers tended to have lower rates of phone usage.
  • They also experience deeper periods of sustained sleep and are more physically active than their lower-performing colleagues.
  • Researchers discovered that high-performing supervisors tended to be more mobile during the day, but they visited a smaller number of distinct places during their working hours
  • High-performing non-supervisors, meanwhile, tend to spend more time at work during the weekends.
  • Future versions, they said, could be tailored to individual jobs and provide workers with meaningful information about changes in their mental well-being during meetings or suggestions for reducing stress each week
  • But they also acknowledge that the valuable private data could prove volatile if it falls into a company’s hands without employee consent. Campbell suggested there might be a middle ground, such as companies offering incentives to employees who opt into a program that treats precise assessment data as one tool among several for evaluating performance.
  • “If there was any point down the road where I could have an application on my phone that could provide an objective assessment of my performance, that might be an incentive for workers to use it," he said. “Imagine being able to say, ‘Here’s the evidence that I deserve to be promoted or that my boss is standing in my way.’"
  • “I can’t really look into a crystal ball, but I’m hopeful this passive sensing technology will be used to empower the workforce rather than used against them," he added.
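The highlights describe the researchers' system only at a high level: daily sensor features go in, a performance classification comes out, at roughly 80 percent accuracy. Below is a minimal, hypothetical sketch of that kind of pipeline on synthetic data; the feature set, the random-forest model and the label construction are illustrative assumptions, not the researchers' published method.

```python
# Hypothetical sketch: classify "high performer" days from daily sensor
# features (sleep, phone usage, activity, time at work). The data, the
# feature set and the model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days = 500

X = np.column_stack([
    rng.normal(7.0, 1.0, n_days),    # hours of sleep
    rng.normal(2.5, 1.0, n_days),    # hours of phone usage
    rng.normal(8000, 2000, n_days),  # step count
    rng.normal(8.0, 1.5, n_days),    # hours at the workplace
])

# Synthetic label mirroring the correlations the article reports:
# more sleep and activity, less phone time -> higher performance.
signal = 0.5 * X[:, 0] - 0.8 * X[:, 1] + 0.0003 * X[:, 2]
y = (signal + rng.normal(0, 0.7, n_days) > np.median(signal)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()  # study reports ~80% on real data
print(f"cross-validated accuracy: {acc:.2f}")
```

The printed number is an artifact of the synthetic data; the point is the shape of the pipeline: features extracted from passive sensing, a supervised model, and cross-validated accuracy as the headline metric.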
nrashkind

FEMA Says at Least 7 People at the Disaster Agency Have the Coronavirus - The New York ... - 0 views

  • The agency leading the nation’s coronavirus response said that seven of its employees had tested positive for the virus, with another four cases pending.
  • Union leaders last week had asked the agency, the Federal Emergency Management Agency, how many employees had tested positive, and in which offices, so that workers who might have interacted with those people could decide whether to get tested as well.
  • But those steps only go so far.
  • ...9 more annotations...
  • “If we’re out there handing out masks and gloves, and we’ve got Covid, then they’re contaminated,” said Mr. Reaves, referring to the disease caused by the coronavirus.
  • The concern over the health and safety of FEMA employees comes as the agency is already stretched thin by three years of major natural disasters.
  • The virus, however, is forcing the agency to rethink that approach. It has urged its staff to work from home when possible, and distance themselves from their colleagues when it isn’t. FEMA has also restricted the number of disaster victims who are allowed inside its field offices at once, and has made it easier for states to shelter victims in hotels or other settings where they don’t have to be crammed together.
  • “FEMA has taken every precaution recommended by the C.D.C. to protect all employees,” Ms. Litzow added, referring to the Centers for Disease Control and Prevention.
  • Mr. Reaves said that at least two other people who worked in the office had since told him that they were self-isolating out of concern that they were exposed.
  • Some FEMA officials had grown concerned over how crowded its headquarters had become since President Trump tapped the disaster agency to lead his administration’s response to the coronavirus.
  • FEMA’s communications office did not say if any employees are self-isolating because they have symptoms.
  • The office also didn’t comment on its decision to decline the union’s request to find out which offices have had confirmed cases.
  • In its letter to the union, the agency suggested that providing that information could violate employees’ privacy. At some FEMA locations, the agency said
nrashkind

Exclusive: Amazon entices warehouse employees to grocery unit with higher pay - Reuters - 0 views

  • Amazon.com Inc is offering higher pay to recruit its own warehouse employees to pick and pack Whole Foods groceries amid rising demand and a worker shortage, according to an internal document reviewed by Reuters.
  • This move, known as labor sharing, highlights how the e-commerce giant is reallocating some of its vast workforce to handle a spike in online sales of groceries, as millions of Americans are stuck at home amid the COVID-19 outbreak.
  • Workers in other states where Amazon operates grocery services have received similar communications, including California, Nevada, and Tennessee.
  • ...6 more annotations...
  • Employees who are selected to make the switch can make $19 per hour, a $2 raise on top of the pay hike Amazon announced earlier this month.
  • “As we continue to see a significant increase in demand for grocery orders, we are offering temporary opportunities for associates across our fulfillment network to provide additional support,” an Amazon spokesperson said on late Friday, confirming the action.
  • Amazon has been doubling down on the grocery industry since its $13.7 billion acquisition of Whole Foods in 2017.
  • Since the outbreak, grocery delivery has become a lifeline for people to get household staples while trying to avoid stepping outside.
  • So far, the coronavirus has spread to at least 17 Amazon warehouses in the U.S., prompting workers and lawmakers to question whether enough safety measures have been taken to protect employees on the front lines.
  • Grocery stores are competing for workers to fulfill online orders. Walmart, with a fast growing online grocery business, plans to hire 150,000 employees at stores, distribution and fulfillment centers through May.
nrashkind

Amazon is sued over warehouses after New York worker brings coronavirus home, cousin di... - 0 views

  • Amazon.com Inc has been sued for allegedly fostering the spread of the coronavirus by mandating unsafe working conditions, causing at least one employee to contract COVID-19, bring it home, and see her cousin die.
  • The complaint was filed on Wednesday in the federal court in Brooklyn, New York, by three employees of the JFK8 fulfillment center in Staten Island, and by family members.
  • One employee, Barbara Chandler, said she tested positive for COVID-19 in March and later saw several household members become sick, including a cousin who died on April 7.
  • ...6 more annotations...
  • It said Amazon forces employees to work at “dizzying speeds, even if doing so prevents them from socially distancing, washing their hands, and sanitizing their work spaces.”
  • Unions, elected officials and some employees have faulted Amazon’s treatment of workers, including the firing of some critical of warehouse conditions.
  • Chief Executive Jeff Bezos said last week that Amazon has not fired people for such criticism.
  • Amazon is spending more than $800 million on coronavirus safety in this year’s first half, including cleaning, temperature checks and face masks.
  • Amazon ended 2019 with 798,000 full- and part-time employees.
  • The case is Palmer et al v Amazon.com Inc., U.S. District Court, Eastern District of New York, No. 20-02468.