Group items tagged FTC

Javier E

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users, including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer of the Information Assurance division, the cyberdefense wing of the National Security Agency.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”

Facebook's Push for Facial Recognition Prompts Privacy Alarms - The New York Times

  • Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people. Facebook has said that users are in charge of that process, telling them: “You control face recognition.”
  • But critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off.
  • Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data
  • In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool.
  • People could turn it off. But privacy experts said Facebook had neither obtained users’ opt-in consent for the technology nor explicitly informed them that the company could benefit from scanning their photos
  • Separately, privacy and consumer groups lodged a complaint with the Federal Trade Commission in April saying Facebook added facial recognition services, like the feature to help identify impersonators, without obtaining prior consent from people before turning it on. The groups argued that Facebook violated a 2011 consent decree that prohibits it from deceptive privacy practices
  • Critics said Facebook took an early lead in consumer facial recognition services partly by turning on the technology as the default option for users. In 2010, it introduced a photo-labeling feature called Tag Suggestions that used face-matching software to suggest the names of people in users’ photos.
  • “Facebook is somehow threatening me that, if I do not buy into face recognition, I will be in danger,” said Viviane Reding, the former justice commissioner of the European Commission who is now a member of the European Parliament. “It goes completely against the European law because it tries to manipulate consent.”
  • “When Tag Suggestions asks you ‘Is this Jill?’ you don’t think you are annotating faces to improve Facebook’s face recognition algorithm,” said Brian Brackeen, the chief executive of Kairos, a facial recognition company. “Even the premise is an unfair use of people’s time and labor.”
  • The huge trove of identified faces, he added, enabled Facebook to quickly develop one of the world’s most powerful commercial facial recognition engines. In 2014, Facebook researchers said they had trained face-matching software “on the largest facial dataset to date, an identity labeled dataset of four million facial images.”
  • Facebook may only be getting started with its facial recognition services. The social network has applied for various patents, many of them still under consideration, which show how it could use the technology to track its online users in the real world.
  • One patent application, published last November, described a system that could detect consumers within stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases, the document said.
  • Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones
  • But legal filings in the class-action suit hint at the technology’s importance to Facebook’s business.
  • If the suit were to move forward, Facebook’s lawyers argued in a recent court document, “the reputational and economic costs to Facebook will be irreparable.”

How 'Stealth' Consolidation Is Undermining Competition - WSJ

  • Big tech and big mergers get the headlines, but the real monopoly problem is beneath the surface. In numerous industries and regions, competition has declined and corporate concentration risen through acquisitions often too small to draw the scrutiny of antitrust watchdogs.
  • The number of enforcement cases brought by the Justice Department’s antitrust division against alleged anticompetitive agreements and monopolistic behavior has plummeted in the past decade
  • The FTC, while continuing to challenge mergers resulting in just two to four competitors, has since the mid-2000s been less likely to challenge mergers that result in five to eight competitors.
  • Until 2001, deals worth more than $15 million had to be reported to the antitrust authorities. That year, the threshold was raised and indexed to economic growth, and is now $90 million.
  • For transactions involving few tangible assets such as in technology and pharmaceutical startups, the threshold is $360 million.
  • after the 2001 changes, the number of merger notifications dropped 70% and the number of mergers that didn’t require notification jumped nearly 50%
  • Before the change, around a third of merger investigations involved deals worth less than $50 million. After the change, the number of such deals investigated fell to close to zero.
  • between 1997 and 2017 more than 4,000 acquisitions of kidney dialysis centers were proposed. About half were above the reporting threshold, and in 265 cases, the FTC required divestitures to resolve competition concerns
  • Among the half below the threshold, the FTC required just three divestitures. Two companies controlled about 31% of facilities in 1997. By 2016, two companies, DaVita Inc. and Fresenius Medical Care, controlled 77% of facilities
  • His preliminary results suggest the numbers of nurses per technician decline and patients per hemodialysis machine rise at facilities acquired in mergers below the reporting threshold. That, he said, could be evidence of reduced quality of care, though he acknowledged it could also reflect increased efficiency.
  • 22% of markets for physicians are highly concentrated (according to federal guidelines), and they got that way mostly via acquisitions too small to be reported.
  • pharmaceutical companies often halt development of competing drugs at startups they acquire, especially when the acquisition is just small enough to escape antitrust reporting requirements.

Acxiom, the Quiet Giant of Consumer Database Marketing - NYTimes.com

  • Acxiom. But analysts say it has amassed the world’s largest commercial database on consumers — and that it wants to know much, much more. Its servers process more than 50 trillion data “transactions” a year. Company executives have said its database contains information about 500 million active consumers worldwide, with about 1,500 data points per person. That includes a majority of adults in the United States.
  • But privacy advocates say they are more troubled by data brokers’ ranking systems, which classify some people as high-value prospects, to be offered marketing deals and discounts regularly, while dismissing others as low-value — known in industry slang as “waste.”
  • Julie Brill, a member of the Federal Trade Commission, says she would like data brokers in general to tell the public about the data they collect, how they collect it, whom they share it with and how it is used. “If someone is listed as diabetic or pregnant, what is happening with this information? Where is the information
  • It has recruited talent from Microsoft, Google, Amazon.com and Myspace and is using a powerful, multiplatform approach to predicting consumer behavior that could raise its standing among investors and clients.
  • Acxiom has its own classification system, PersonicX, which assigns consumers to one of 70 detailed socioeconomic clusters and markets to them accordingly. In this situation, it pegs Mr. Hughes as a “savvy single” — meaning he’s in a cluster of mobile, upper-middle-class people who do their banking online, attend pro sports events, are sensitive to prices — and respond to free-shipping offers.
  • Analysts say companies design these sophisticated ecosystems to prompt consumers to volunteer enough personal data — like their names, e-mail addresses and mobile numbers — so that marketers can offer them customized appeals any time, anywhere.
  • Acxiom maintains its own database on about 190 million individuals and 126 million households in the United States. Separately, it manages customer databases for or works with 47 of the Fortune 100 companies. It also worked with the government after the September 2001 terrorist attacks
  • This year, Advertising Age ranked Epsilon, another database marketing firm, as the biggest advertising agency in the United States, with Acxiom second.
  • it’s as if the ore of our data-driven lives were being mined, refined and sold to the highest bidder, usually without our knowledge — by companies that most people rarely even know exist.
  • if marketing algorithms judge certain people as not worthy of receiving promotions for higher education or health services, they could have a serious impact.
  • “Over time, that can really turn into a mountain of pathways not offered, not seen and not known about,”
  • Unlike consumer reporting agencies that sell sensitive financial information about people for credit or employment purposes, database marketers aren’t required by law to show consumers their own reports and allow them to correct errors.
  • ACXIOM’S Consumer Data Products Catalog offers hundreds of details — called “elements” — that corporate clients can buy about individuals or households, to augment their own marketing databases.
  • the catalog also offers delicate information that has set off alarm bells among some privacy advocates, who worry about the potential for misuse by third parties that could take aim at vulnerable groups. Such information includes consumers’ interests — derived, the catalog says, “from actual purchases and self-reported surveys” — like “Christian families,” “Dieting/Weight Loss,” “Gaming-Casino,” “Money Seekers” and “Smoking/Tobacco.” Acxiom also sells data about an individual’s race, ethnicity and country of origin. “Our Race model,” the catalog says, “provides information on the major racial category: Caucasians, Hispanics, African-Americans, or Asians.” Competing companies sell similar data.
  • “At the same time, this is ethnic profiling,” he says. “The people on this list, they are being sold based on their ethnic stereotypes. There is a very strong citizen’s right to have a veto over the commodification of their profile.”
  • race coding may be incorrect. And even if a data broker has correct information, a person may not want to be marketed to based on race.
  • In its system, a store clerk need only “capture the shopper’s name from a check or third-party credit card at the point of sale and then ask for the shopper’s ZIP code or telephone number.” With that data Acxiom can identify shoppers within a 10 percent margin of error, it says, enabling stores to reward their best customers with special offers. Other companies offer similar services. “This is a direct way of circumventing people’s concerns about privacy,” says Mr. Chester of the Center for Digital Democracy.
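The point-of-sale record linkage Acxiom describes above can be sketched in a few lines: a surname captured from a check or card, plus a volunteered ZIP code, is enough of a key to look a shopper up in a broker's files. Everything here is invented for illustration (the names, ZIPs, and the toy "broker database"); real matching uses fuzzier logic and far more fields.

```python
# Toy sketch of the point-of-sale record linkage described above.
# All names, ZIP codes, and database contents are hypothetical.

BROKER_DB = {
    ("hughes", "90210"): {"cluster": "savvy single", "household_id": 1001},
    ("garcia", "60614"): {"cluster": "young digerati", "household_id": 1002},
}

def match_shopper(name_on_card: str, zip_code: str):
    """Look up a shopper by surname + ZIP, the two fields a clerk can
    capture at the register (from a check or card, plus one question)."""
    key = (name_on_card.strip().lower(), zip_code.strip())
    return BROKER_DB.get(key)  # None -> no match in the broker's files

print(match_shopper("Hughes", "90210"))
# a match lets the store append this purchase to an existing profile
```

Two low-precision fields suffice because, across a large enough database, (surname, ZIP) pairs are nearly unique, which is roughly why Acxiom can claim a 10 percent margin of error.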
Javier E

Amazon's Antitrust Antagonist Has a Breakthrough Idea - The New York Times - 0 views

  • “Ideas and assumptions that it was heretical to question are now openly being contested,” she said. “We’re finally beginning to examine how antitrust laws, which were rooted in deep suspicion of concentrated private power, now often promote it.”
  • Like many a wonkish youth, Ms. Khan headed to Washington after graduating in 2010, applying for a position at the left-leaning New America Foundation. Barry Lynn, who headed the organization’s Open Markets antimonopoly initiative, seized on her application. “It’s so much easier to teach public policy to people who already know how to write than teach writing to public policy experts,” said Mr. Lynn, a former journalist
  • “The long-term interests of consumers include product quality, variety and innovation — factors best promoted through both a robust competitive process and open markets,” she wrote.
  • “It’s one thing to say that antitrust enforcement has gotten far too weak,” said Daniel Crane, a University of Michigan scholar who doesn’t agree with Ms. Khan but credits her with opening up a much-needed debate. “It’s a bridge much further to say we should go back to the populist goal of leveling playing fields and checking ‘bigness.’ ”
  • Her father was a management consultant; her mother an executive in information services. Ms. Khan went to Williams College, where she wrote a thesis on the political philosopher Hannah Arendt. She was the editor of the student paper but worked hard at everything.
  • Her Yale Law Journal paper argued that monopoly regulators who focus on consumer prices are thinking too short-term. In Ms. Khan’s view, a company like Amazon — one that sells things, competes against others selling things, and owns the platform where the deals are done — has an inherent advantage that undermines fair competition.
  • “The whole country has been struggling to understand why the economy is not operating in the right way,” Mr. Cicilline said. “Wages have remained stagnant. Workers have less and less power. All we’re trying to do is create a level playing field, and that’s harder when you have megacompanies that make it virtually impossible for small competitors.” He added, “We’re at the very beginning of solutions to this.”
  • The battle for intellectual supremacy takes place less these days in learned journals and more on social media, where tongues are sharp and branding is all. This is not Ms. Khan’s strong suit. She is always polite, even on Twitter. One consequence is that she didn’t give much thought about what to call the movement to reboot antitrust. Neither did anyone else
  • Mr. Chopra, with Ms. Khan’s assistance, pushed the argument further on Sept. 6 with a 14-page official comment that suggested the F.T.C. bring back a tool buried in its toolbox: the ability to make rules. Contemporary antitrust regulation, the commissioner wrote, is conducted in the courts, which makes it numbingly slow and dependent on high-paid expert witnesses. He called for the agency to use its authority to issue rules that would “advance clarity and certainty” about what is, and what is not, an unfair method of competition.
  • From Amazon’s point of view, however, it is a problem indeed that Ms. Khan concludes in the Yale paper that regulating parts of the company like a utility “could make sense.” She also said it “could make sense” to treat Amazon’s e-commerce operation like a bridge, highway, port, power grid or telephone network — all of which are required to allow access to their infrastructure on a nondiscriminatory basis.
runlai_jiang

Your Location Data Is Being Sold-Often Without Your Knowledge - WSJ - 0 views

  • like that Jack in the Box ad that appears whenever you get near one, in whichever app you have open at the time—and as popular apps harvest your lucrative location data, the potential for leaking or exploiting this data has never been higher.
  • Every time you say “yes” to an app that asks to know your location, you are also potentially authorizing that app to sell your data.
  • They aim to compile a complete record of where everyone in America spends their time, in order to chop those histories into market segments to sell to corporate advertisers.
  • The data required to serve you any single ad may pass through many companies’ systems in milliseconds—from data broker to ad marketplace to an agency’s custom system.
  • Another way you can be tracked without your knowing it is through any open Wi-Fi hot spot you might pass. If your phone’s Wi-Fi is on, you’re constantly broadcasting a unique MAC address and a history of past Wi-Fi connections.
  • is that with most individual data vendors holding only parts of your data, your complete, identifiable profile is never all in one place. Giants like Google and Facebook, who do have all your data in one place, say they are diligent about throwing away or not gathering what they don’t need, and eliminating personally identifying information from the remainder.
  • A map of the U.S., showing areas of unusually high visits to sites where location-based advertising firm Groundtruth pushes ads to mobile devices.
  • There are plenty of ways to track you without getting your permission. Some of the most intrusive are the easiest to implement. Your telco knows where you are at all times, because it knows which cell towers your phone is near. In the U.S., how much data service-providers sell is up to them.
  • Retailers sometimes use these addresses to identify repeat customers, and they can also use them to track you as you go from one of their stores to another.
  • WeatherBug, one of the most popular weather apps for Android and iPhone, is owned by the location advertising company GroundTruth. It’s a natural fit: Weather apps need to know where you are and provide value in exchange for that information.
  • Every month GroundTruth tracks 70 million people in the U.S. as they go to work in the morning, come home at night, surge in and out of public events, take vacations, you name it.
  • Companies like Acxiom could be prime targets for hackers, said Chandler Givens, chief executive of TrackOff, which develops software to protect user identity and personal information
  • Nearly every year, a bill comes up in the Senate or House that would regulate our data privacy—the most recent was in the wake of the Equifax breach—but none has passed. In some respects, the U.S. appears to be moving backward on privacy protections.
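The passive Wi-Fi tracking described above reduces to a simple join: every broadcast of a phone's MAC address near a hot spot is a (device, time, place) sighting, and grouping sightings by MAC yields a movement trail per device. The MACs and locations below are invented; this is a sketch of the aggregation step only, not of any vendor's actual pipeline.

```python
# Toy reconstruction of movement trails from passive Wi-Fi sightings.
# A phone with Wi-Fi on broadcasts its MAC address near any hot spot;
# each sighting is (mac, unix_timestamp, location) -- all invented here.
from collections import defaultdict

sightings = [
    ("aa:bb:cc:dd:ee:ff", 1000, "store_downtown"),
    ("11:22:33:44:55:66", 1010, "airport"),
    ("aa:bb:cc:dd:ee:ff", 2000, "store_mall"),
    ("aa:bb:cc:dd:ee:ff", 3000, "store_downtown"),
]

def trails(rows):
    """Group sightings by MAC and sort by time: one trail per device."""
    by_mac = defaultdict(list)
    for mac, ts, loc in rows:
        by_mac[mac].append((ts, loc))
    return {mac: [loc for _, loc in sorted(v)] for mac, v in by_mac.items()}

print(trails(sightings)["aa:bb:cc:dd:ee:ff"])
# -> ['store_downtown', 'store_mall', 'store_downtown']  (a repeat customer)
```

This is also why recent phone operating systems randomize the MAC address in Wi-Fi probe requests: a rotating identifier breaks the grouping step that makes the trail possible.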
lmunch

Opinion | The Internet's 'Dark Patterns' Need to Be Regulated - The New York Times - 0 views

  • Consider Amazon. The company perfected the one-click checkout. But canceling a $119 Prime subscription is a labyrinthine process that requires multiple screens and clicks.
  • These are examples of “dark patterns,” the techniques that companies use online to get consumers to sign up for things, keep subscriptions they might otherwise cancel or turn over more personal data. They come in countless variations: giant blinking sign-up buttons, hidden unsubscribe links, red X’s that actually open new pages, countdown timers and pre-checked options for marketing spam. Think of them as the digital equivalent of trying to cancel a gym membership.
  • Last year, the F.T.C. fined the parent company of the children’s educational program ABCmouse $10 million over what it said were tactics to keep customers paying as much as $60 annually for the service by obscuring language about automatic renewals and forcing users through six or more screens to cancel.
  • Donald Trump’s 2020 campaign, for instance, used a website with pre-checked boxes that committed donors to give far more money than they had intended, a recent Times investigation found. That cost some consumers thousands of dollars that the campaign later repaid.
  • “While there’s nothing inherently wrong with companies making money, there is something wrong with those companies intentionally manipulating users to extract their data,” said Representative Lisa Blunt Rochester, a Delaware Democrat, at the F.T.C. event. She said she planned to introduce dark pattern legislation later this year.
  • More than one in 10 e-commerce sites rely on dark patterns, according to another study, which also found that many online customer testimonials (“I wouldn’t buy any other brand!”) and tickers counting recent purchases (“7,235 customers bought this service in the past week”) were phony, randomly generated by software programs.
  • “The internet shouldn’t be the Wild West anymore — there’s just too much traffic,” said a Loyola Law School professor, Lauren Willis, at the F.T.C. event. “We need stop signs and street signs to enable consumers to shop easily, accurately.”
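The phony purchase tickers the study above describes are trivial to produce, which is part of the problem: the "social proof" number is drawn at random rather than read from any sales record. A minimal illustration (the copy and number range are invented):

```python
# Toy illustration of the phony "social proof" tickers described above:
# the count is drawn at random, not read from any sales database.
import random

def fake_ticker(low=5000, high=9000, seed=None):
    rng = random.Random(seed)  # seeding only to make the demo repeatable
    n = rng.randint(low, high)
    return f"{n:,} customers bought this service in the past week"

print(fake_ticker(seed=42))
```

Because nothing in the rendered page distinguishes a randomly generated counter from a real one, the deception is invisible to shoppers and detectable only by auditing the site's code or traffic, which is what the researchers did.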
Javier E

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times - 0 views

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined together, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton).
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.
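The graduated corporate tax proposed above would work like personal income brackets, taxing each additional slice of profit at a higher marginal rate. The thresholds and rates below are invented purely to show the mechanics; the authors do not specify a schedule.

```python
# Sketch of a graduated corporate tax, marginal-bracket style.
# Bracket thresholds and rates are invented for illustration only.

BRACKETS = [          # (upper bound of bracket in $, marginal rate)
    (1_000_000, 0.10),
    (100_000_000, 0.20),
    (float("inf"), 0.35),
]

def graduated_tax(profit: float) -> float:
    """Tax each slice of profit at its bracket's marginal rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if profit <= lower:
            break
        tax += (min(profit, upper) - lower) * rate
        lower = upper
    return tax

print(graduated_tax(500_000))    # 50000.0: all profit in the 10% bracket
print(graduated_tax(2_000_000))  # 300000.0: 1M @ 10% + 1M @ 20%
```

The key property is that rates rise with dollar profit, not with profit margin, so the levy bears most heavily on the largest firms regardless of how efficiently they earn.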
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred.“Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • the White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
criscimagnael

TikTok Ukraine War Videos Raise Questions About Spread of Misinformation - The New York... - 0 views

  • “What I see on TikTok is more real, more authentic than other social media,” said Ms. Hernandez, a student in Los Angeles. “I feel like I see what people there are seeing.”
  • But what Ms. Hernandez was actually viewing and hearing in the TikTok videos was footage of Ukrainian tanks taken from video games, as well as a soundtrack that was first uploaded to the app more than a year ago.
  • TikTok, the Chinese-owned video app known for viral dance and lip-syncing videos, has emerged as one of the most popular platforms for sharing videos and photos of the Russia-Ukraine war. Over the past week, hundreds of thousands of videos about the conflict have been uploaded to the app from across the world, according to a review by The Times. The New Yorker has called the invasion the world’s “first TikTok war.”
  • Many popular TikTok videos of the invasion — including of Ukrainians livestreaming from their bunkers — offer real accounts of the action, according to researchers who study the platform. But other videos have been impossible to authenticate and substantiate. Some simply appear to be exploiting the interest in the invasion for views, the researchers said.
  • The clip was then used in many TikTok videos, some of which included a note stating that all 13 soldiers had died. Ukrainian officials later said in a Facebook post that the men were alive and had been taken prisoner, but the TikTok videos have not been corrected.
  • “People trust it. The result is that a lot of people are seeing false information about Ukraine and believing it.”
  • TikTok and other social media platforms are also under pressure from U.S. lawmakers and Ukrainian officials to curb Russian misinformation about the war, especially from state-backed media outlets such as Russia Today and Sputnik.
  • For years, TikTok largely escaped sustained scrutiny about its content. Unlike Facebook, which has been around since 2004, and YouTube, which was founded in 2005, TikTok only became widely used in the past five years.
  • The app has navigated some controversies in the past. It has faced questions over harmful fads that appeared to originate on its platform, as well as whether it allows underage users and adequately protects their privacy.
  • That includes TikTok’s algorithm for its “For You” page, which suggests videos based on what people have previously seen, liked or shared. Viewing one video with misinformation likely leads to more videos with misinformation being shown, Ms. Richards said.
  • But audio can be misused and taken out of context, Ms. Richards said.
  • “Video is the hardest format to moderate for all platforms,” said Alex Stamos, the director of the Stanford Internet Observatory and a former head of security at Facebook. “When combined with the fact that TikTok’s algorithm is the primary factor for what content a user sees, as opposed to friendships or follows on the big U.S. platforms, this makes TikTok a uniquely potent platform for viral propaganda.”
  • “I feel like lately, the videos I’m seeing are designed to get me riled up, or to emotionally manipulate me,” she said. “I get worried so now, sometimes, I find myself Googling something or checking the comments to see if it is real before I trust it.”
  • “I guess I don’t really know what war looks like,” she said. “But we go to TikTok to learn about everything, so it makes sense we would trust it about this too.”
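The feedback loop Ms. Richards describes (viewing one misinformation video leads to more being shown) is a rich-get-richer process, and a toy simulation makes the dynamic concrete. The topics, weights, and reinforcement rule below are invented; TikTok's actual ranking system is proprietary and far more complex.

```python
# Toy simulation of an engagement-driven recommendation loop: each
# watched video nudges up the weight of its topic, so the feed drifts
# toward whatever the user lingered on. Purely illustrative.
import random

def simulate_feed(steps=1000, boost=0.5, seed=0):
    rng = random.Random(seed)
    weights = {"dance": 1.0, "news": 1.0, "war_misinfo": 1.0}
    history = []
    for _ in range(steps):
        topics, w = zip(*weights.items())
        pick = rng.choices(topics, weights=w)[0]  # sample by current weight
        history.append(pick)
        weights[pick] += boost  # "watched" => recommend more of the same
    return history

h = simulate_feed()
print(max(set(h), key=h.count))  # one topic comes to dominate the feed
```

Early random choices get amplified, so which topic dominates is arbitrary, but dominance itself is near-inevitable: that is why a single misinformation view tends to beget many more.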
Javier E

Why Didn't the Government Stop the Crypto Scam? - 1 views

  • Securities and Exchange Commission Chair Gary Gensler, who took office in April of 2021 with a deep background in Wall Street, regulatory policy, and crypto, which he had taught at MIT years before joining the SEC. Gensler came in with the goal of implementing the rule of law in the crypto space, which he knew was full of scams and based on unproven technology. Yesterday, on CNBC, he was again confronted with Andrew Ross Sorkin essentially asking, “Why were you going after minor players when this Ponzi scheme was so flagrant?”
  • Cryptocurrencies are securities, and should fit under securities law, which would have imposed rules that would foster a de facto ban of the entire space. But since regulators had not actually treated them as securities for the last ten years, a whole new gray area of fake law had emerged
  • Almost as soon as he took office, Gensler sought to fix this situation, and treat them as securities. He began investigating important players
  • But the legal wrangling to just get the courts to treat crypto as a set of speculative instruments regulated under securities law made the law moot
  • In May of 2022, a year after Gensler began trying to do something about Terra/Luna, Kwon’s scheme blew up. In a comically-too-late-to-matter gesture, an appeals court then said that the SEC had the right to compel information from Kwon’s now-bankrupt scheme. It is absolute lunacy that well-settled law, like the ability for the SEC to investigate those in the securities business, is now being re-litigated.
  • many crypto ‘enthusiasts’ watching Gensler discuss regulation with his predecessor “called for their incarceration or worse.”
  • it wasn’t just the courts who were an impediment. Gensler wasn’t the only cop on the beat. Other regulators, like those at the Commodities Futures Trading Commission, the Federal Reserve, or the Office of Comptroller of the Currency, not only refused to take action, but actively defended their regulatory turf against an attempt from the SEC to stop the scams.
  • Behind this was the fist of political power. Everyone saw the incentives the Senate laid down when every single Republican, plus a smattering of Democrats, defeated the nomination of crypto-skeptic Saule Omarova to become the powerful bank regulator at the Office of the Comptroller of the Currency.
  • Instead of strong figures like Omarova, we had a weakling acting Comptroller Michael Hsu at the OCC, put there by the excessively cautious Treasury Secretary Janet Yellen. Hsu refused to stop bank interactions with crypto or fintech because, as he told Congress in 2021, “These trends cannot be stopped.”
  • It’s not just these regulators; everyone wanted a piece of the bureaucratic pie. In March of 2022, before it all unraveled, the Biden administration issued an executive order on crypto. In it, Biden said that virtually every single government agency would have a hand in the space.
  • That’s… insane. If everyone’s in charge, no one is.
  • And behind all of these fights was the money and political prestige of some of the most powerful people in Silicon Valley, who were funding a large political fight to write the rules for crypto, with everyone from former Treasury Secretary Larry Summers to former SEC Chair Mary Jo White on the payroll.
  • (Even now, even after it was all revealed as a Ponzi scheme, Congress is still trying to write rules favorable to the industry. It’s like, guys, stop it. There’s no more bribe money!)
  • Moreover, the institution Gensler took over was deeply weakened. Since the Reagan administration, wave after wave of political leaders at the SEC have gutted the place and dumbed down the enforcers. Courts have tied up the commission in knots, and Congress has defanged it.
  • Under Trump, crypto exploded, because his SEC chair Jay Clayton had no real policy on crypto (and then immediately went into the industry after leaving). The SEC was so dormant that when Gensler came into office, some senior lawyers actually revolted over his attempt to make them do work.
  • In other words, the regulators were tied up in the courts, they were against an immensely powerful set of venture capitalists who have poured money into Congress and D.C., they had feeble legal levers, and they had to deal with ‘crypto enthusiasts' who thought they should be jailed or harmed for trying to impose basic rules around market manipulation.
  • The bottom line is, Gensler is just one regulator, up against a lot of massed power, money, and bad institutional habits. And we as a society simply made the choice through our elected leaders to have little meaningful law enforcement in financial markets, which first became blindingly obvious in 2008 during the financial crisis, and then became comical ten years later when a sector whose only real use cases were money laundering, Ponzi scheming, or buying drugs on the internet managed to rack up enough political power to bring Tony Blair and Bill Clinton to a conference held in a tax haven billed as ‘the future.’
  • It took a few years, but New Dealers finally implemented a workable set of securities rules, with the courts agreeing on basic definitions of what was a security. By the 1950s, SEC investigators could raise an eyebrow and change market behavior, and the amount of cheating in finance had dropped dramatically.
  • By 1935, the New Dealers had set up a new agency, the Securities and Exchange Commission, and cleaned out the FTC. Yet there was still immense concern that Roosevelt had not been able to tame Wall Street. The Supreme Court didn’t really ratify the SEC as a constitutional body until 1938, and nearly struck it down in 1935 when a conservative Supreme Court made it harder for the SEC to investigate cases.
  • Institutional change, in other words, takes time.
  • It’s a lesson to remember as we watch the crypto space melt down, with ex-billionaire Sam Bankman-Fried
  • It’s not like perfidy in crypto was some hidden secret. At the top of the market, back in December 2021, I wrote a piece very explicitly saying that crypto was a set of Ponzi schemes. It went viral, and I got a huge amount of hate mail from crypto types.
  • one of the more bizarre aspects of the crypto meltdown is the deep anger not just at those who perpetrated it, but at those who were trying to stop the scam from going on. For instance, here’s crypto exchange Coinbase CEO Brian Armstrong, who just a year ago was fighting regulators vehemently, blaming the cops for allowing gambling in the casino he helps run.
  • FTX.com was an offshore exchange not regulated by the SEC. The problem is that the SEC failed to create regulatory clarity here in the US, so many American investors (and 95% of trading activity) went offshore. Punishing US companies for this makes no sense.
Javier E

Some Silicon Valley VCs Are Becoming More Conservative - The New York Times - 0 views

  • The circle of Republican donors in the nation’s tech capital has long been limited to a few tech executives such as Scott McNealy, a founder of Sun Microsystems; Meg Whitman, a former chief executive of eBay; Carly Fiorina, a former chief executive of Hewlett-Packard; Larry Ellison, the executive chairman of Oracle; and Doug Leone, a former managing partner of Sequoia Capital.
  • But mostly, the tech industry cultivated close ties with Democrats. Al Gore, the former Democratic vice president, joined the venture capital firm Kleiner Perkins in 2007. Over the next decade, tech companies including Airbnb, Google, Uber and Apple eagerly hired former members of the Obama administration.
  • During that time, Democrats moved further to the left and demonized successful people who made a lot of money, further alienating some tech leaders, said Bradley Tusk, a venture capital investor and political strategist who supports Mr. Biden.
  • after Mr. Trump won the election that year, the world seemed to blame tech companies for his victory. The resulting “techlash” against Facebook and others caused some industry leaders to reassess their political views, a trend that continued through the social and political turmoil of the pandemic.
  • The start-up industry has also been in a downturn since 2022, with higher interest rates sending capital fleeing from risky bets and a dismal market for initial public offerings crimping opportunities for investors to cash in on their valuable investments.
  • Some investors said they were frustrated that his pick for chair of the Federal Trade Commission, Lina Khan, has aggressively moved to block acquisitions, one of the main ways venture capitalists make money. They said they were also unhappy that Mr. Biden’s pick for head of the Securities and Exchange Commission, Gary Gensler, had been hostile to cryptocurrency companies.
  • Last month, Mr. Sacks, Mr. Thiel, Elon Musk and other prominent investors attended an “anti-Biden” dinner in Hollywood, where attendees discussed fund-raising and ways to oppose Democrats.
  • Some also said they disliked Mr. Biden’s proposal in March to raise taxes, including a 25 percent “billionaire tax” on certain holdings that could include start-up stock, as well as a higher tax rate on profits from successful investments.
  • “If you keep telling someone over and over that they’re evil, they’re eventually not going to like that,” he said. “I see that in venture capital.”
  • Some tech investors are also fuming over how Mr. Biden has handled foreign affairs and other issues.
  • Mr. Andreessen, a founder of Andreessen Horowitz, a prominent Silicon Valley venture firm, said in a recent podcast that “there are real issues with the Biden administration.” Under Mr. Trump, he said, the S.E.C. and F.T.C. would be headed by “very different kinds of people.” But a Trump presidency would not necessarily be a “clean win” either, he added.
  • Mr. Sacks said at the tech conference last week that he thought such taxes could kill the start-up industry’s system of offering stock options to founders and employees. “It’s a good reason for Silicon Valley to think really hard about who it wants to vote for,” he said.
  • “Tech, venture capital and Silicon Valley are looking at the current state of affairs and saying, ‘I’m not happy with either of those options,’” he said. “‘I can no longer count on Democrats to support tech issues, and I can no longer count on Republicans to support business issues.’”
  • Ben Horowitz, a founder of Andreessen Horowitz, wrote in a blog post last year that the firm would back any politician who supported “an optimistic technology-enabled future” and oppose any who did not. Andreessen Horowitz has donated $22 million to Fairshake, a political action group focused on supporting crypto-friendly lawmakers.
  • Venture investors are also networking with lawmakers in Washington at events like the Hill & Valley conference in March, organized by Jacob Helberg, an adviser to Palantir, a tech company co-founded by Mr. Thiel. At that event, tech executives and investors lobbied lawmakers against A.I. regulations and asked for more government spending to support the technology’s development in the United States.
  • This month, Mr. Helberg, who is married to Mr. Rabois, donated $1 million to the Trump campaign.