
History Readings: Group items tagged paypal


yelpreviews57

Buy Verified PayPal Account - Old/New USA, UK, CA Countries - 0 views

  •  
    Do you know why you should buy a verified PayPal account? A verified PayPal account gives you security for your money. If your account is unverified, it may be limited at any time, and if a limited account holds money, you cannot withdraw any of it. But if your account is fully verified with real USA/UK/CA documents, there is little chance of it being limited within one or two years. So when you buy a PayPal account, please don't buy an unverified one. Please buy a verified PayPal account and protect your money for two years or more.
Javier E

For Stanford Class of '94, a Gender Gap More Powerful Than the Internet - NYTimes.com - 0 views

  • “The Internet was supposed to be the great equalizer,” said Gina Bianchini, the woman who had appeared on the cover of Fortune. “So why hasn’t our generation of women moved the needle?”
  • identity politics pushed many people into homogeneous groups; Scott Walker, one of the only African-Americans in the class to try founding a start-up, said in an interview that he regretted spending so much time at his all-black fraternity, which took him away from the white friends from freshman year who went on to found and then invest in technology companies.
  • If the dawn of the start-up era meant that consumer-oriented ideas were becoming more important than proprietary technology, he asked himself aloud, shouldn’t more women have flooded in?
  • ...21 more annotations...
  • But with the web, “all of the sudden we began moving to a market where first mover advantage became enormous,” he said. Connection speeds were growing faster, Americans were starting to shop online, and multiplying e-commerce sites fought gladiatorial battles to control most every area of spending.
  • But there were still many hoops women had greater trouble jumping through — components that had to be custom-built, capital that needed to be secured from a small number of mostly male-run venture firms.
  • “The notion that diversity in an early team is important or good is completely wrong,” he added. “The more diverse the early group, the harder it is for people to find common ground.”
  • David Sacks, on the other hand, was unmarried and unencumbered, and in 1999 he left politics, his law degree and a job at the consulting firm McKinsey & Company to join his Stanford Review friends at a technology start-up, because of “the desire to live on the edge, to fight an epic battle, to experience in a very diluted way what previous generations must have felt as they prepared to go to war,” he wrote at the time. For his generation, he wrote, “instead of violence, unbridled capitalism has become the preferred vehicle for channeling their energy, intellect and aggression.”
  • his lack of social grace became an asset, according to Mr. Thiel and other former colleagues. He did not waste time on meetings that seemed pointless, and he bluntly insisted that the engineers whittle an eight-page PayPal registration process down to one.
  • he and Mr. Thiel now had a setting in which to try out their ideas about diversity and meritocracy. “In the start-up crucible, performing is all that matters,” Mr. Sacks wrote about that time. He wanted to give all job applicants tests of cognitive ability, according to his colleague Keith Rabois, and when the company searched for a new chief executive, one of the requirements was an I.Q. of 160 — genius level.
  • But those debates did a great deal for Mr. Sacks. After graduation, he and Mr. Thiel published “The Diversity Myth,” a book-length critique of Stanford’s efforts. Within a few more years, he, Mr. Thiel, Mr. Rabois and others had transformed themselves into a close-knit network of technology entrepreneurs — innovators who created billion-dollar business after billion-dollar business, using the ideas, ethos and group bonds they had honed at The Stanford Review.
  • intentionally or not, he stated something many people quietly believed: The same thing that made Silicon Valley phenomenally successful also kept it homogeneous, and start-ups had an almost inevitable like-with-like quality.
  • The kind of common ground shared by the early PayPal leaders “is always the critical ingredient on the founding teams,” Mr. Thiel said in an interview. “You have these great friendships that were built over some period of time. Silicon Valley flows out of deep relationships that people have built. That’s the structural reality.”
  • Less than 10 years after graduation, he and Mr. Thiel had been transformed from outcasts into favorites with a reputation for seeing the future. Far from the only libertarians in Silicon Valley, they had finally found an environment that meshed perfectly with their desire for unfettered competition and freedom from constraints. The money they made seemed like vindication of their ideas.
  • The success of the struggle to create PayPal, and its eventual sale price, gave the men a new power: the knowledge to create new companies and the ability to fund their own and one another’s. Billion-dollar start-ups had been rare. But in the next few years, the so-called PayPal Mafia went on to found seven companies that reached blockbuster scale, including YouTube, LinkedIn, Yelp and a business-messaging service called Yammer, founded by Mr. Sacks and sold a few years later to Microsoft for $1.2 billion.
  • Since 1999, the number of female partners in venture capital has declined by nearly half, from 10 percent to 6 percent, according to a recent Babson College study.
  • in early 2014, Ms. Vassallo was quietly let go. The firm was downsizing over all, especially in green technology, one of Ms. Vassallo’s specialties, and men were shown the exit as well. But in interviews, several former colleagues said it was far from an easy environment for women, with all-male outings and fierce internal competition for who got which board seat — meaning internal credit — for each company, not to mention a sexual discrimination lawsuit filed by a female junior partner, scheduled for trial in early 2015.
  • They also said that Ms. Vassallo, earnest and so technical that she started a robotics program at a local girls’ school, had not been as forceful, or as adept a politician, as some of her male peers.
  • Another woman from the class of 1994 was quoted in the Fortune article: Trae Vassallo, who was Traci Neist when she built the taco-eating machine all those years ago, attended Stanford Business School with Ms. Herrin and Ms. Bianchini, co-founded a mobile device company, and then joined Kleiner Perkins, a premier venture capital firm.
  • As classmates started conversations with greetings like “How’s your fund?” some of those who did not work in technology joked that they felt like chumps. The Stanford campus had gone computer science crazy, with the majority of students taking programming courses. A career in technology didn’t feel like a risk anymore — it felt like a wise bet, said Jennifer Widom, a programming professor turned engineering dean. Computer science “is a degree that guarantees you a future, regardless of what form you decide to take it in,” she said.
  • The nature of start-ups was shifting again, too, this time largely in women’s favor. From servers onward, many components could be inexpensively licensed instead of custom-built. Founders could turn to a multiplying array of investment sources, meaning they no longer had to be supplicants at a handful of male-run venture firms. The promise that the Internet would be a leveler was finally becoming a bit more fulfilled.
  • The frenzy had an unlikely effect on some members of the Stanford Review group: They were becoming cheerleaders for women in technology, not for ideological reasons, but for market-based ones.
  • Like many others, he was finding that the biggest obstacle to starting new companies was a dearth of technical talent so severe they worried it would hinder innovation.
  • The real surprise of the reunion weekend, however, was that more of the women in the class of ’94 were finally becoming entrepreneurs, later and on a smaller scale than many of the men, but founders nonetheless.
  • The rhythms of their lives and the technology industry were finally clicking: Companies were becoming easier to start just as their children were becoming more self-sufficient, and they did not want to miss another chance.
runlai_jiang

Russian Influence Campaign Extracted Americans' Personal Data - WSJ - 1 views

  • That was in early 2017. It wasn’t until recently, after being contacted by The Wall Street Journal, that Ms. Hales would learn that Black4Black and “partner” groups, including BlackMattersUS, were among hundreds of Facebook and Instagram accounts set up by a pro-Kremlin propaganda agency to meddle in American politics, Facebook records show.
  • The fake directory is one example of the elaborate schemes that Russian “trolls” have pursued to try to collect personal and business information from Americans, the Journal has found. Leveraging social media, Russians have collected data by peddling niche business directories, convincing activists to sign petitions and bankrolling self-defense training classes in return for student information.
  • which also owns Instagram, said the company allows users to find out whether they have “liked” or “followed” any Russia-backed accounts through an online tool.
  • ...8 more annotations...
  • It isn’t clear for what purpose the data were collected, but intelligence and cybersecurity experts say it could be used for identity theft or leveraged as part of a wider political-influence effort that didn’t end with the 2016 election
  • Russian operators used stolen American identities to open bank and PayPal accounts, create fake driver’s licenses, post messages online and buy political advertisements before the 2016 election, according to the indictment.
  • Another Russian group, “Don’t Shoot,” identified as Russia-linked in congressional hearings last fall, appeared to collect information by asking followers to sign petitions and report police misconduct on its website, DoNotShoot.us.
  • The operators allegedly kept a list of more than 100 Americans and their political views to “monitor recruitment efforts.”
  • Their targets included niche groups ranging from Texas secessionists and “Southern heritage” proponents to the lesbian, gay, bisexual and transgender community and the Black Lives Matter movement.
  • Black4Black and its partner account BlackMattersUS, which had hundreds of thousands of followers on social media, asked the American entrepreneurs to answer detailed questions so it could write articles promoting their companies. More than a dozen entrepreneurs contacted by the Journal said they turned over data to participate in the directory, yet none reported gaining any new customers.
  • However, the tool doesn’t notify users who exchanged messages with or turned over information to the accounts.
  • “We’re all just trying to make an honest living here,” said Ms. Hales, the business owner from Cleveland. “I would feel comfortable knowing that whoever’s behind this and whatever information they were pursuing has been shut down.”
  •  
    Facebook and Instagram accounts posing as activist groups, such as Black4Black and DoNotShoot.us, are revealed to be associated with Russian operators who collected Americans' personal information and political views to manipulate the election, open bank and PayPal accounts, and create...
Javier E

The American retirement system is built for the rich - The Washington Post - 0 views

  • While loudly and proudly proclaiming that their goal is to nurture nest eggs for the working class, lawmakers have constructed a complex of tax shelters for the well-to-do. The lopsided result is that as of 2019, nearly 29,000 taxpayers had amassed “mega-IRAs” — individual retirement accounts with balances of $5 million or more — while half of American households had no retirement accounts at all.
  • according to the Congressional Budget Office, the top 10th of households reap a larger share of the income tax subsidy for retirement savings than the bottom 80 percent.
  • It’s working out just fine for the financial institutions that manage assets in IRAs and 401(k)s. The combined amount in those vehicles reached $21.6 trillion at the end of 2021 — up fivefold since 2000 — and the more money that pours in, the more that managers collect in fees
  • ...22 more annotations...
  • University of Virginia law professor Michael Doran — who held tax policy roles at the Treasury Department under Presidents Bill Clinton and George W. Bush — calls the current state of affairs “the great American retirement fraud.”
  • Secure 2.0 would take the fraud to a new level: Its congressional supporters have engaged in Enron-style accounting gimmicks to mask the bill’s effects on the deficit
  • from the outset, IRAs were a generous gift to the upper class. At the time, very few low- and middle-income individuals could afford to stash $1,500 in a retirement account each year — median income for U.S. households was $11,100 in 1974 — so the people taking full advantage of the new IRAs tended to be relatively rich
  • since the benefit was structured as a deduction, it was worth more to taxpayers in higher income brackets.
  • In the nearly half-century since, Congress has continually expanded the amount that individuals can pour into tax-deferred savings accounts.
  • Now, the JCT estimates that 401(k)s and other similar defined-contribution plans cost the federal government $200 billion per year.
  • individuals can contribute up to $6,000 per year to an IRA ($7,000 if age 50 or older), plus $20,500 to a 401(k) ($27,000 for 50-year-olds and up), with their employers potentially chipping in to bring the 401(k) total to $61,000 ($67,500 for the over-50 set).
  • In 2018, the most recent year for which data is available, 58 percent of taxpayers with wage income made no contribution to 401(k)-style plans, and less than 4 percent bumped up against the contribution cap.
  • As of 2020, approximately 63 percent of U.S. households had no such accounts.
  • (The very largest IRAs, like PayPal co-founder Peter Thiel’s reported $5 billion account, result from a different loophole: the ability of founders and early-stage investors to stuff IRAs with start-up stock
  • When JCT released data last summer showing that 28,615 taxpayers had accumulated $5 million or more in IRAs, lawmakers cried foul. Rep. Richard Neal (D-Mass.), who as chairman of the Ways and Means Committee is the top tax writer in the House, lamented the “exploitation” of IRAs. “IRAs are intended to help Americans achieve long-term financial security, not to enable those who already have extraordinary wealth to avoid paying their fair share in taxes,”
  • I calculated that an individual who made the maximum 401(k) contributions since 1990, investing exclusively in an S&P 500 index fund, would have more than $7 million in her account today. (A rough compounding sketch follows this list.)
  • Forbes revealed more than a decade ago that Thiel and another PayPal co-founder were using their IRAs to shelter entrepreneurial earnings; the Government Accountability Office flagged the IRA-stuffing phenomenon in 2014; and rather than clamping down, lawmakers from both parties sat on their hands.)
  • The Secure 2.0 bill, sponsored by Neal, doubles down on the inequities of the status quo. It will inevitably result in even more of the mega-IRAs that Neal and other Democrats decry.
  • Under current law, taxpayers must begin to take withdrawals from their 401(k)s and traditional IRAs at age 72. (It had been 70½ before Secure 1.0, signed into law by President Donald Trump in 2019, raised the age by a year and a half.)
  • Secure 2.0 would bump that up to age 75. The change would mean that taxpayers with supersize IRAs could enjoy three extra years of tax-free growth before they needed to take money out
  • Lower-income retirees wouldn’t benefit because they don’t have the luxury of holding off on withdrawals, which they need to cover living expenses.
  • Another provision would lift the cap on 401(k) catch-up contributions at ages 62, 63 and 64 from $6,500 to $10,000. Factoring in employer matching contributions, that would raise the maximum 401(k) inflow to $71,000 per year.
  • if lawmakers were genuinely concerned about retirement security for people who need it, they wouldn’t start by aiding taxpayers who can afford to save more each year than most Americans earn. The higher limit on catch-up contributions will simply allow high-income taxpayers to race further ahead.
  • The top-weighted benefits of Secure 2.0 might be tolerable if they were offset by other tax increases on the rich — if this were all just moving money from one deep pocket to another. But the items audaciously labeled as “revenue provisions” in the bill generate revenue as real as Monopoly money.
  • The Rothification provisions in Secure 2.0 bring $35 billion of revenue into the 10-year window — ostensibly offsetting the cost of the bill’s giveaways — but the $35 billion is pure make-believe: It comes at the expense of an equivalent amount of revenue down the road.
  • If lawmakers from either party were truly concerned about the plight of low-income retirees, they would focus on strengthening Social Security, which actually provides a safety net for older people, rather than adding more deficit-financed bells and whistles to retirement accounts for the rich.
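
A rough sketch of the compounding arithmetic behind the "$7 million" claim quoted above. This is an illustrative toy model, not the Post author's actual computation: the contribution schedule below is a simple linear ramp and the 10 percent annual return is an assumed constant, whereas the article used real year-by-year contribution limits and actual S&P 500 returns, so the totals will differ.

```python
# Toy model of decades of maxed-out 401(k) contributions compounding in an
# index fund. The contribution schedule and the flat 10% return below are
# illustrative assumptions, not historical data.

def final_balance(contributions, annual_return):
    """Compound the running balance one year, then add that year's contribution."""
    balance = 0.0
    for amount in contributions:
        balance = balance * (1 + annual_return) + amount
    return balance

# Hypothetical schedule: limits ramping linearly from ~$8,000 (1990) to ~$20,500 (2022).
years = list(range(1990, 2023))
assumed_limits = [8000 + (20500 - 8000) * (y - 1990) / (2022 - 1990) for y in years]

print(f"Balance after {len(years)} years: ${final_balance(assumed_limits, 0.10):,.0f}")
```

Even under these modest assumptions the balance compounds into a multimillion-dollar account, which is the mechanism the author is pointing at: large annual contribution caps turn into mega-balances for the minority of savers who can afford to hit them every year.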
Javier E

Silicon Valley Worries About Addiction to Devices - NYTimes.com - 1 views

  • founders from Facebook, Twitter, eBay, Zynga and PayPal, and executives and managers from companies like Google, Microsoft, Cisco and others listened to or participated
  • they debated whether technology firms had a responsibility to consider their collective power to lure consumers to games or activities that waste time or distract them.
  • Eric Schiermeyer, a co-founder of Zynga, an online game company and maker of huge hits like FarmVille, has said he has helped addict millions of people to dopamine, a neurochemical that has been shown to be released by pleasurable activities, including video game playing, but also is understood to play a major role in the cycle of addiction. But what he said he believed was that people already craved dopamine and that Silicon Valley was no more responsible for creating irresistible technologies than, say, fast-food restaurants were responsible for making food with such wide appeal. “They’d say: ‘Do we have any responsibility for the fact people are getting fat?’ Most people would say ‘no,’ ” said Mr. Schiermeyer. He added: “Given that we’re human, we already want dopamine.”
  • ...4 more annotations...
  • the Facebook executive, said his primary concern was that people live balanced lives. At the same time, he acknowledges that the message can run counter to Facebook’s business model, which encourages people to spend more time online. “I see the paradox,” he said.
  • “The responsibility we have is to put the most powerful capability into the world,” he said. “We do it with eyes wide open that some harm will be done. Someone might say, ‘Why not do so in a way that causes no harm?’ That’s naïve.” “The alternative is to put less powerful capability in people’s hands and that’s a bad trade-off,” he added.
  • she believed that interactive gadgets could create a persistent sense of emergency by setting off stress systems in the brain — a view that she said was becoming more widely accepted. “It’s this basic cultural recognition that people have a pathological relationship with their devices,” she said. “People feel not just addicted, but trapped.”
  • Richard Fernandez, an executive coach at Google and one of the leaders of the mindfulness movement, said the risks of being overly engaged with devices were immense.
  •  
    First, I would like to point out that I read this article while distracted from my Extended Essay. Paradoxical, I know. The article points out many of the negative qualities of the glamorous lure of the internet. I found it interesting that internet usage can actually stimulate the production of dopamine in the body, making our attraction to the computer both a "high" and a true addiction. Not only have I seen an increasing number of articles on this topic in the past week or so, but I have also seen a rather large amount of articles pertaining to the "depression inducing" effect of the internet. Drawing from another article, logging onto social networking sites automatically bombards us with pictures of people we know having fun. Their pictures are beautiful and their status updates are witty. It is easy to see all of this content and immediately think something along the lines of: "why isn't my life that fun/hilarious/exciting?" or "I really wish that I were vacationing in Bora Bora". What we need to remember is that people choose what goes online, and, therefore, only choose to share the most glamourous sides of their lives. And let's not forget that those pictures most likely have at least one or more Instagram filter on them...
Javier E

Silicon Valley's Youth Problem - NYTimes.com - 0 views

  • Why do these smart, quantitatively trained engineers, who could help cure cancer or fix healthcare.gov, want to work for a sexting app?
  • But things are changing. Technology as service is being interpreted in more and more creative ways: Companies like Uber and Airbnb, while properly classified as interfaces and marketplaces, are really providing the most elevated service of all — that of doing it ourselves.
  • All varieties of ambition head to Silicon Valley now — it can no longer be designated the sole domain of nerds like Steve Wozniak or even successor nerds like Mark Zuckerberg. The face of web tech today could easily be a designer, like Brian Chesky at Airbnb, or a magazine editor, like Jeff Koyen at Assignmint. Such entrepreneurs come from backgrounds outside computer science and are likely to think of their companies in terms more grandiose than their technical components
  • ...18 more annotations...
  • Intel, founded by Gordon Moore and Robert Noyce, both physicists, began by building memory chips that were twice as fast as old ones. Sun Microsystems introduced a new kind of modular computer system, built by one of its founders, Andy Bechtolsheim. Their “big ideas” were expressed in physical products and grew out of their own technical expertise. In that light, Meraki, which came from Biswas’s work at M.I.T., can be seen as having its origins in the old guard. And it followed what was for decades the highway that connected academia to industry: Grad students researched technology, powerful advisers brokered deals, students dropped out to parlay their technologies into proprietary solutions, everyone reaped the profits. That implicit guarantee of academia’s place in entrepreneurship has since disappeared. Graduate students still drop out, but to start bike-sharing apps and become data scientists. That is, if they even make it to graduate school. The success of self-educated savants like Sean Parker, who founded Napster and became Facebook’s first president with no college education to speak of, set the template. Enstitute, a two-year apprenticeship, embeds high-school graduates in plum tech positions. Thiel Fellowships, financed by the PayPal co-founder and Facebook investor Peter Thiel, give $100,000 to people under 20 to forgo college and work on projects of their choosing.
  • Much of this precocity — or dilettantism, depending on your point of view — has been enabled by web technologies, by easy-to-use programming frameworks like Ruby on Rails and Node.js and by the explosion of application programming interfaces (A.P.I.s) that supply off-the-shelf solutions to entrepreneurs who used to have to write all their own code for features like a login system or an embedded map. Now anyone can do it, thanks to the Facebook login A.P.I. or the Google Maps A.P.I.
  • One of the more enterprising examples of these kinds of interfaces is the start-up Stripe, which sells A.P.I.s that enable businesses to process online payments. When Meraki first looked into taking credit cards online, according to Biswas, it was a monthslong project fraught with decisions about security and cryptography. “Now, with Stripe, it takes five minutes,” he said. “When you combine that with the ability to get a server in five minutes, with Rails and Twitter Bootstrap, you see that it has become infinitely easier for four people to get a start-up off the ground.” (A minimal payment-API sketch follows this list.)
  • The sense that it is no longer necessary to have particularly deep domain knowledge before founding your own start-up is real; that and the willingness of venture capitalists to finance Mark Zuckerberg look-alikes are changing the landscape of tech products. There are more platforms, more websites, more pat solutions to serious problems
  • There’s a glass-half-full way of looking at this, of course: Tech hasn’t been pedestrianized — it’s been democratized. The doors to start-up-dom have been thrown wide open. At Harvard, enrollment in the introductory computer-science course, CS50, has soared
  • many of the hottest web start-ups are not novel, at least not in the sense that Apple’s Macintosh or Intel’s 4004 microprocessor were. The arc of tech parallels the arc from manufacturing to services. The Macintosh and the microprocessor were manufactured products. Some of the most celebrated innovations in technology have been manufactured products — the router, the graphics card, the floppy disk
  • One of Stripe’s founders rowed five seat in the boat I coxed freshman year in college; the other is his older brother. Among the employee profiles posted on its website, I count three of my former teaching fellows, a hiking leader, two crushes. Silicon Valley is an order of magnitude bigger than it was 30 years ago, but still, the start-up world is intimate and clubby, with top talent marshaled at elite universities and behemoths like Facebook and Google.
  • A few weeks ago, a programmer friend and I were talking about unhappiness, in particular the kind of unhappiness that arises when you are 21 and lavishly educated with the world at your feet. In the valley, it’s generally brought on by one of two causes: coming to the realization either that your start-up is completely trivial or that there are people your own age so knowledgeable and skilled that you may never catch up.
  • The latter source of frustration is the phenomenon of “the 10X engineer,” an engineer who is 10 times more productive than average. It’s a term that in its cockiness captures much of what’s good, bad and impossible about the valley. At the start-ups I visit, Friday afternoons devolve into bouts of boozing and Nerf-gun wars. Signing bonuses at Facebook are rumored to reach the six digits. In a landscape where a product may morph several times over the course of a funding round, talent — and the ability to attract it — has become one of the few stable metrics.
  • there is a surprising amount of angst in Silicon Valley. Which is probably inevitable when you put thousands of ambitious, talented young people together and tell them they’re god’s gift to technology. It’s the angst of an early hire at a start-up that only he realizes is failing; the angst of a founder who raises $5 million for his company and then finds out an acquaintance from college raised $10 million; the angst of someone who makes $100,000 at 22 but is still afraid that he may not be able to afford a house like the one he grew up in.
  • San Francisco, which is steadily stealing the South Bay’s thunder. (“Sometime in the last two years, the epicenter of consumer technology in Silicon Valley has moved from University Ave. to SoMa,” Terrence Rohan, a venture capitalist at Index Ventures, told me
  • Both the geographic shift north and the increasingly short product cycles are things Jim attributes to the rise of Amazon Web Services (A.W.S.), a collection of servers owned and managed by Amazon that hosts data for nearly every start-up in the latest web ecosystem.
  • now, every start-up is A.W.S. only, so there are no servers to kick, no fabs to be near. You can work anywhere. The idea that all you need is your laptop and Wi-Fi, and you can be doing anything — that’s an A.W.S.-driven invention.”
  • This same freedom from a physical location or, for that matter, physical products has led to new work structures. There are no longer hectic six-week stretches that culminate in a release day followed by a lull. Every day is release day. You roll out new code continuously, and it’s this cycle that enables companies like Facebook, as its motto goes, to “move fast and break things.”
  • Part of the answer, I think, lies in the excitement I’ve been hinting at. Another part is prestige. Smart kids want to work for a sexting app because other smart kids want to work for the same sexting app. “Highly concentrated pools of top talent are one of the rarest things you can find,” Biswas told me, “and I think people are really attracted to those environments.
  • These days, a new college graduate arriving in the valley is merely stepping into his existing network. He will have friends from summer internships, friends from school, friends from the ever-increasing collection of incubators and fellowships.
  • As tech valuations rise to truly crazy levels, the ramifications, financial and otherwise, of a job at a pre-I.P.O. company like Dropbox or even post-I.P.O. companies like Twitter are frequently life-changing. Getting these job offers depends almost exclusively on the candidate’s performance in a series of technical interviews, where you are asked, in front of frowning hiring managers, to whip up correct and efficient code.
  • Moreover, a majority of questions seem to be pulled from undergraduate algorithms and data-structures textbooks, which older engineers may have not laid eyes on for years.
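
The Stripe annotation above contrasts a months-long credit card integration with a five-minute API call. Here is a minimal sketch of what that kind of integration looks like with Stripe's Python client; the API key, amount, and printed fields are placeholders for illustration, not Meraki's actual setup or a production configuration.

```python
# Minimal illustration of "payments as an API call" using Stripe's Python
# library (pip install stripe). The key and values are placeholders.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test secret key

# One call creates a PaymentIntent; the card networks, PCI compliance, and
# secure card storage that start-ups once built themselves sit behind the API.
intent = stripe.PaymentIntent.create(
    amount=2000,                      # amount in cents: $20.00
    currency="usd",
    payment_method_types=["card"],
)

print(intent.id, intent.status)       # e.g. pi_..., requires_payment_method
```

The point is the shape of the work rather than the vendor: login, maps, and payments all collapsed from custom infrastructure into a few lines against someone else's API, which is the shift the article describes.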
Javier E

Tech Billionaires Want to Destroy the Universe - The Atlantic - 0 views

  • “Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer; two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.”
  • Ignore for a moment any objections you might have to the simulation hypothesis, and everything impractical about the idea that we could somehow break out of reality, and think about what these people are trying to do.
  • The two billionaires (Elon Musk is a prime suspect) are convinced that they’ll emerge out of this drab illusion into a more shining reality, lit by a brighter and more beautiful star. But for the rest of us the experience would be very different—you lose your home, you lose your family, you lose your life and your body and everything around you
  • ...17 more annotations...
  • Every summer we watch dozens of villains plotting to blow up the entire universe, but the motivations are always hazy. Why, exactly, does the baddie want to destroy everything again? Now we know.
  • It’s not just Elon Musk, who stated that ‘there’s a one in a billion chance we’re living in base reality,’ who believes this—in an extraordinary piece of hedge-betting, the Bank of America has judiciously announced that the probability that waking life is just an illusion is, oh, about fifty-fifty
  • Tech products no longer feel like something offered to the public, but something imposed: The great visionary looks at the way everyone is doing something, and decides, single-handedly, to change it.
  • once social reality is the exclusive property of a few geegaw-tinkerers, why shouldn’t physical reality be next? With Google’s Calico seeking hedge-fund investment for human immortality and the Transformative Technology Lab hoping to externalize human consciousness, the tech industry is moving into territory once cordoned off for the occult. Why shouldn’t the fate of the entire cosmos be in the hands of programmers hiding from the California sun, to keep or destroy as they wish?
  • Unsurprisingly, nobody bothered to ask us whether we want the end of the world or not; they’re just setting about trying to do it. Silicon Valley works by solving problems that hadn’t heretofore existed; its culture is pathologically fixated on the notion of ‘disruption.’
  • Its real antecedents are the Gnostics, an early Christian sect who believed that the physical universe was the creation of the demiurge, Samael or Ialdaboath, sometimes figured as a snake with the head of a lion, a blind and stupid god who creates his false world in imperfect imitation of the real Creator. This world is a distorted mirror, an image; in other words, a kind of software.
  • Kabbalist mystics, Descartes with his deceiving demon, and Zhuangzi in his butterfly dream have all questioned the reality of their sense-experiences, but this isn’t a private, solipsistic hallucination; in the simulation hypothesis, reality is a prison for all of us
  • there’s always been the lingering suspicion that our reality is somehow unreal—it’s just that what we once thought about in terms of dreams and magic, cosmic minds or whispering devils, is now expressed through boring old computers, that piece of clunky hardware that waits predatory on your desk every morning to code the finest details of your life.
  • The Gnostics were often accused by other early Christians of Satanism, and they might have had a point: Many identified the jealous, petty, prurient God of the Old Testament with the Demiurge, while sects such as the Ophites revered the serpent in the Garden of Eden as the first to offer knowledge to humanity, freeing them from their first cage
  • In his book, Baudrillard also talks about virtual realities and deceptive images, but his point isn’t that they have clouded our perception of the reality beyond. The present system of social images is so vast and all-encompassing that it’s produced a total reality for itself; it only lies when it has us thinking that there’s something else behind the façade. Baudrillard, always something of an overgrown child, loved to refer to Disneyland: As he pointed out, it’s in no way a fake—when you leave its gates, you return to an America that’s just one giant Disneyland, a copy without an original, from coast to coast
  • ‘The simulacrum is never that which conceals the truth—it is the truth which conceals that there is none.
  • Digital and cinematic media actively construct our experience of reality. The world of film stars and theme parks, social media and supermarket shelves designed to look like something out of an old-time grocery—this is the one we live in. Our Silicon Valley Satanists have made a very questionable assumption: What if there’s nowhere to break out into?
  • the virtual is also real. Why is a universe composed of software necessarily any less real than one composed of matter? Computer simulation is of course only a metaphor, a new-ish way of describing what was once expressed in oneiric or theological terms. They can’t really mean that our universe was built in something similar to the machine you’re using to read these words right now;
  • simulation is a process independent of whatever divine or technological apparatus is used to achieve it. The real argument is that, by some unknown mechanism, what we see is only a function of what really exists. But we’ve known since Kant that our sense-perception can never give us a full account of the material world; all this can be said of any conceivable reality
  • Outside the simulation hypothesis there are scientists who propose that our universe is a single black hole, with what we perceive as matter being a hologram emerging from a two-dimensional ring of information along its event horizon; there are mathematical Platonists who, following Max Tegmark, consider the world to be a set of abstract mathematical objects, of which physical objects are a crude epiphenomenon. If matter doesn’t ‘really’ exist, there’s no need for anything to be rooted anywhere; we might live suspended in a looping chain of simulations and appearances that coils back on itself and never has to touch the ground
  • Elon Musk and his co-religionists aren’t actually blinded by artifice; they’re fixated on a strange and outdated notion that somewhere, there has to be a concrete reality—they’ve just decided that it’s not this one
  • What’s far more worrying is the fact that the people who want to destroy the only world we really have are also the people increasingly in charge of it.
Javier E

How Uber Got Lost - The New York Times - 0 views

  • The most vaunted title in Silicon Valley is, has been, and ever will be “founder.” It’s less of a title than a statement. “I made this,” the founder proclaims. “I invented it out of nothing. I conjured it into being.”
  • If this sounds messianic, that’s because it is. Founder culture — or more accurately, founder worship — emerged as bedrock faith in Silicon Valley from several strains of quasi-religious philosophy
  • 1960s-era San Francisco embraced a sexual, chemical, hippie-led revolution inspired by dreams of liberated consciousness and utopian communities.
  • ...3 more annotations...
  • This anti-establishment counterculture mixed surprisingly well with emerging ideas about the efficiency of individual greed and the gospel of creative destruction.
  • Over the decades, the ethos informed the creation of ventures like Apple, Netscape, PayPal — and Uber.
  • By 2009, when the company was founded, Silicon Valley saw a willingness to bend — and even break — the rules not as an unfortunate trait, but as a sign of a promising entrepreneur with a bright future
Javier E

Elon Musk Has the World's Strangest Social Calendar - The New York Times - 0 views

  • They describe someone whose closest friendships (many of them longstanding) are with other wealthy tech luminaries of middle age.
  • He regularly takes meetings until 9 or 10 p.m., but when he goes out, he does so with frenetic bombast, almost as if live-action role-playing a billionaire playboy
  • A fan of lavish costume parties, Mr. Musk revels in settings, like the desert art festival/rave Burning Man, where he can take on a role outside himself.
  • ...7 more annotations...
  • Mr. Musk favors intense, one-on-one conversations — one person described a party conversation with him for 90 unbroken minutes about astrophysics.
  • Mr. Musk once acknowledged in an interview with Axel Springer’s chief executive, Mathias Döpfner, that he gets lonely; in a 2017 interview with Rolling Stone, he said that as a child he vowed to never be alone.
  • One obvious way that he staves off loneliness is using Twitter. Mr. Musk, who frequently responds to the many Regular Joe accounts that tweet at him, uses the service almost every day, in a way that suggests the website is an outlet not just for his ideas but for his emotions.
  • “I spent almost every day with Elon for five years — apart from family time, he spends nearly every waking hour working,” Mr. Teller said. “If your idea of fun is a long weekend of rocket engineering in a humid, sparsely populated corner of South Texas, then you should be jealous of Elon’s social life.”
  • Many of his closest friends are longtime investors in his companies and share his technical worldview and his geeky preoccupations. Mostly in their 40s and 50s, these friends often see Mr. Musk at quiet dinners in the private back rooms of restaurants — low-key affairs in which the conversation turns to subjects like science fiction or World War II fighter planes.
  • Rebecca Eisenberg, a lawyer in Palo Alto, Calif., who was senior counsel at PayPal from 2001 to 2007, was catching up with Mr. Thiel, she said, when Mr. Musk broke into the conversation. According to Ms. Eisenberg, Mr. Musk expressed his opinion that China was likely to invade Taiwan and that the American workers at a new Taiwan-owned chip factory in Arizona would never be as skillful as their Taiwan counterparts. Mr. Thiel, meanwhile, was largely quiet.
  • “I have two teenagers and four pets,” Ms. Eisenberg said. “It seemed like Peter was the dominant dog, and Elon was trying to impress him.”
Javier E

Peter Thiel Is Taking a Break From Democracy - The Atlantic - 0 views

  • Thiel’s unique role in the American political ecosystem. He is the techiest of tech evangelists, the purest distillation of Silicon Valley’s reigning ethos. As such, he has become the embodiment of a strain of thinking that is pronounced—and growing—among tech founders.
  • why does he want to cut off politicians
  • But the days when great men could achieve great things in government are gone, Thiel believes. He disdains what the federal apparatus has become: rule-bound, stifling of innovation, a “senile, central-left regime.”
  • ...95 more annotations...
  • Peter Thiel has lost interest in democracy.
  • Thiel has cultivated an image as a man of ideas, an intellectual who studied philosophy with René Girard and owns first editions of Leo Strauss in English and German. Trump quite obviously did not share these interests, or Thiel’s libertarian principles.
  • For years, Thiel had been saying that he generally favored the more pessimistic candidate in any presidential race because “if you’re too optimistic, it just shows you’re out of touch.” He scorned the rote optimism of politicians who, echoing Ronald Reagan, portrayed America as a shining city on a hill. Trump’s America, by contrast, was a broken landscape, under siege.
  • Thiel is not against government in principle, his friend Auren Hoffman (who is no relation to Reid) says. “The ’30s, ’40s, and ’50s—which had massive, crazy amounts of power—he admires because it was effective. We built the Hoover Dam. We did the Manhattan Project,” Hoffman told me. “We started the space program.”
  • Their failure to make the world conform to his vision has soured him on the entire enterprise—to the point where he no longer thinks it matters very much who wins the next election.
  • His libertarian critique of American government has curdled into an almost nihilistic impulse to demolish it.
  • “Voting for Trump was like a not very articulate scream for help,” Thiel told me. He fantasized that Trump’s election would somehow force a national reckoning. He believed somebody needed to tear things down—slash regulations, crush the administrative state—before the country could rebuild.
  • He admits now that it was a bad bet.
  • “There are a lot of things I got wrong,” he said. “It was crazier than I thought. It was more dangerous than I thought. They couldn’t get the most basic pieces of the government to work. So that was—I think that part was maybe worse than even my low expectations.”
  • Reid Hoffman, who has known Thiel since college, long ago noticed a pattern in his old friend’s way of thinking. Time after time, Thiel would espouse grandiose, utopian hopes that failed to materialize, leaving him “kind of furious or angry” about the world’s unwillingness to bend to whatever vision was possessing him at the moment
  • Thiel. He is worth between $4 billion and $9 billion. He lives with his husband and two children in a glass palace in Bel Air that has nine bedrooms and a 90-foot infinity pool. He is a titan of Silicon Valley and a conservative kingmaker.
  • “Peter tends to be not ‘glass is half empty’ but ‘glass is fully empty,’” Hoffman told me.
  • he tells the story of his life as a series of disheartening setbacks.
  • He met Mark Zuckerberg, liked what he heard, and became Facebook’s first outside investor. Half a million dollars bought him 10 percent of the company, most of which he cashed out for about $1 billion in 2012.
  • Thiel made some poor investments, losing enormous sums by going long on the stock market in 2008, when it nose-dived, and then shorting the market in 2009, when it rallied
  • on the whole, he has done exceptionally well. Alex Karp, his Palantir co-founder, who agrees with Thiel on very little other than business, calls him “the world’s best venture investor.”
  • Thiel told me this is indeed his ambition, and he hinted that he may have achieved it.
  • He longs for radical new technologies and scientific advances on a scale most of us can hardly imagine
  • He longs for a world in which great men are free to work their will on society, unconstrained by government or regulation or “redistributionist economics” that would impinge on their wealth and power—or any obligation, really, to the rest of humanity
  • Did his dream of eternal life trace to The Lord of the Rings?
  • He takes for granted that this kind of progress will redound to the benefit of society at large.
  • More than anything, he longs to live forever.
  • Calling death a law of nature is, in his view, just an excuse for giving up. “It’s something we are told that demotivates us from trying harder,”
  • Thiel grew up reading a great deal of science fiction and fantasy—Heinlein, Asimov, Clarke. But especially Tolkien; he has said that he read the Lord of the Rings trilogy at least 10 times. Tolkien’s influence on his worldview is obvious: Middle-earth is an arena of struggle for ultimate power, largely without government, where extraordinary individuals rise to fulfill their destinies. Also, there are immortal elves who live apart from men in a magical sheltered valley.
  • But his dreams have always been much, much bigger than that.
  • Yes, Thiel said, perking up. “There are all these ways where trying to live unnaturally long goes haywire” in Tolkien’s works. But you also have the elves.
  • How are the elves different from the humans in Tolkien? And they’re basically—I think the main difference is just, they’re humans that don’t die.”
  • During college, he co-founded The Stanford Review, gleefully throwing bombs at identity politics and the university’s diversity-minded reform of the curriculum. He co-wrote The Diversity Myth in 1995, a treatise against what he recently called the “craziness and silliness and stupidity and wickedness” of the left.
  • Thiel laid out a plan, for himself and others, “to find an escape from politics in all its forms.” He wanted to create new spaces for personal freedom that governments could not reach
  • But something changed for Thiel in 2009
  • The people, he concluded, could not be trusted with important decisions. “I no longer believe that freedom and democracy are compatible,” he wrote.
  • An even more notable one followed: “Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”
  • By 2015, six years after declaring his intent to change the world from the private sector, Thiel began having second thoughts. He cut off funding for the Seasteading Institute—years of talk had yielded no practical progress—and turned to other forms of escape
  • “The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom,” he wrote. His manifesto has since become legendary in Silicon Valley, where his worldview is shared by other powerful men (and men hoping to be Peter Thiel).
  • Thiel’s investment in cryptocurrencies, like his founding vision at PayPal, aimed to foster a new kind of money “free from all government control and dilution
  • His decision to rescue Elon Musk’s struggling SpaceX in 2008—with a $20 million infusion that kept the company alive after three botched rocket launches—came with aspirations to promote space as an open frontier with “limitless possibility for escape from world politics
  • It was seasteading that became Thiel’s great philanthropic cause in the late aughts and early 2010s. The idea was to create autonomous microstates on platforms in international waters.
  • “There’s zero chance Peter Thiel would live on Sealand,” he said, noting that Thiel likes his comforts too much. (Thiel has mansions around the world and a private jet. Seal performed at his 2017 wedding, at the Belvedere Museum in Vienna.)
  • As he built his companies and grew rich, he began pouring money into political causes and candidates—libertarian groups such as the Endorse Liberty super PAC, in addition to a wide range of conservative Republicans, including Senators Orrin Hatch and Ted Cruz
  • Sam Altman, the former venture capitalist and now CEO of OpenAI, revealed in 2016 that in the event of global catastrophe, he and Thiel planned to wait it out in Thiel’s New Zealand hideaway.
  • When I asked Thiel about that scenario, he seemed embarrassed and deflected the question. He did not remember the arrangement as Altman did, he said. “Even framing it that way, though, makes it sound so ridiculous,” he told me. “If there is a real end of the world, there is no place to go.”
  • You’d have eco farming. You’d turn the deserts into arable land. There were sort of all these incredible things that people thought would happen in the ’50s and ’60s and they would sort of transform the world.”
  • None of that came to pass. Even science fiction turned hopeless—nowadays, you get nothing but dystopias
  • He hungered for advances in the world of atoms, not the world of bits.
  • Founders Fund, the venture-capital firm he established in 200
  • The fund, therefore, would invest in smart people solving hard problems “that really have the potential to change the world.”
  • This was not what Thiel wanted to be doing with his time. Bodegas and dog food were making him money, apparently, but he had set out to invest in transformational technology that would advance the state of human civilization.
  • He told me that he no longer dwells on democracy’s flaws, because he believes we Americans don’t have one. “We are not a democracy; we’re a republic,” he said. “We’re not even a republic; we’re a constitutional republic.”
  • “It was harder than it looked,” Thiel said. “I’m not actually involved in enough companies that are growing a lot, that are taking our civilization to the next level.”
  • Founders Fund has holdings in artificial intelligence, biotech, space exploration, and other cutting-edge fields. What bothers Thiel is that his companies are not taking enough big swings at big problems, or that they are striking out.
  • In at least 20 hours of logged face-to-face meetings with Buma, Thiel reported on what he believed to be a Chinese effort to take over a large venture-capital firm, discussed Russian involvement in Silicon Valley, and suggested that Jeffrey Epstein—a man he had met several times—was an Israeli intelligence operative. (Thiel told me he thinks Epstein “was probably entangled with Israeli military intelligence” but was more involved with “the U.S. deep state.”)
  • Buma, according to a source who has seen his reports, once asked Thiel why some of the extremely rich seemed so open to contacts with foreign governments. “And he said that they’re bored,” this source said. “‘They’re bored.’ And I actually believe it. I think it’s that simple. I think they’re just bored billionaires.”
  • he has a sculpture that resembles a three-dimensional game board. Ascent: Above the Nation State Board Game Display Prototype is the New Zealander artist Simon Denny’s attempt to map Thiel’s ideological universe. The board features a landscape in the aesthetic of Dungeons & Dragons, thick with monsters and knights and castles. The monsters include an ogre labeled “Monetary Policy.” Near the center is a hero figure, recognizable as Thiel. He tilts against a lion and a dragon, holding a shield and longbow. The lion is labeled “Fair Elections.” The dragon is labeled “Democracy.” The Thiel figure is trying to kill them.
  • When I asked Thiel to explain his views on democracy, he dodged the question. “I always wonder whether people like you … use the word democracy when you like the results people have and use the word populism when you don’t like the results,” he told me. “If I’m characterized as more pro-populist than the elitist Atlantic is, then, in that sense, I’m more pro-democratic.”
  • “I couldn’t find them,” he said. “I couldn’t get enough of them to work.”
  • He said he has no wish to change the American form of government, and then amended himself: “Or, you know, I don’t think it’s realistic for it to be radically changed.” Which is not at all the same thing.
  • When I asked what he thinks of Yarvin’s autocratic agenda, Thiel offered objections that sounded not so much principled as practical.
  • “I don’t think it’s going to work. I think it will look like Xi in China or Putin in Russia,” Thiel said, meaning a malign dictatorship. “It ultimately I don’t think will even be accelerationist on the science and technology side, to say nothing of what it will do for individual rights, civil liberties, things of that sort.”
  • Still, Thiel considers Yarvin an “interesting and powerful” historian
  • he always talks about is the New Deal and FDR in the 1930s and 1940s,” Thiel said. “And the heterodox take is that it was sort of a light form of fascism in the United States.”
  • Yarvin, Thiel said, argues that “you should embrace this sort of light form of fascism, and we should have a president who’s like FDR again.”
  • Did Thiel agree with Yarvin’s vision of fascism as a desirable governing model? Again, he dodged the question.
  • “That’s not a realistic political program,” he said, refusing to be drawn any further.
  • Looking back on Trump’s years in office, Thiel walked a careful line.
  • A number of things were said and done that Thiel did not approve of. Mistakes were made. But Thiel was not going to refashion himself a Never Trumper in retrospect.
  • “I have to somehow give the exact right answer, where it’s like, ‘Yeah, I’m somewhat disenchanted,’” he told me. “But throwing him totally under the bus? That’s like, you know—I’ll get yelled at by Mr. Trump. And if I don’t throw him under the bus, that’s—but—somehow, I have to get the tone exactly right.”
  • Thiel knew, because he had read some of my previous work, that I think Trump’s gravest offense against the republic was his attempt to overthrow the election. I asked how he thought about it.
  • “Look, I don’t think the election was stolen,” he said. But then he tried to turn the discussion to past elections that might have been wrongly decided. Bush-Gore in 2000, for instance.
  • He came back to Trump’s attempt to prevent the transfer of power. “I’ll agree with you that it was not helpful,” he said.
  • there is another piece of the story, which Thiel reluctantly agreed to discuss
  • Puck reported that Democratic operatives had been digging for dirt on Thiel since before the 2022 midterm elections, conducting opposition research into his personal life with the express purpose of driving him out of politics.
  • Among other things, the operatives are said to have interviewed a young model named Jeff Thomas, who told them he was having an affair with Thiel, and encouraged Thomas to talk to Ryan Grim, a reporter for The Intercept. Grim did not publish a story during election season, as the opposition researchers hoped he would, but he wrote about Thiel’s affair in March, after Thomas died by suicide.
  • He deplored the dirt-digging operation, telling me in an email that “the nihilism afflicting American politics is even deeper than I knew.”
  • He also seemed bewildered by the passions he arouses on the left. “I don’t think they should hate me this much,”
  • he spoke at the closed-press event with a lot less nuance than he had in our interviews. His after-dinner remarks were full of easy applause lines and in-jokes mocking the left. Universities had become intellectual wastelands, obsessed with a meaningless quest for diversity, he told the crowd. The humanities writ large are “transparently ridiculous,” said the onetime philosophy major, and “there’s no real science going on” in the sciences, which have devolved into “the enforcement of very curious dogmas.”
  • “Diversity—it’s not enough to just hire the extras from the space-cantina scene in Star Wars,” he said, prompting laughter.
  • Nor did Thiel say what genuine diversity would mean. The quest for it, he said, is “very evil and it’s very silly.”
  • “the silliness is distracting us from very important things,” such as the threat to U.S. interests posed by the Chinese Communist Party.
  • “Whenever someone says ‘DEI,’” he exhorted the crowd, “just think ‘CCP.’”
  • Somebody asked, in the Q&A portion of the evening, whether Thiel thought the woke left was deliberately advancing Chinese Communist interests
  • “It’s always the difference between an agent and asset,” he said. “And an agent is someone who is working for the enemy in full mens rea. An asset is a useful idiot. So even if you ask the question ‘Is Bill Gates China’s top agent, or top asset, in the U.S.?’”—here the crowd started roaring—“does it really make a difference?”
  • About 10 years ago, Thiel told me, a fellow venture capitalist called to broach the question. Vinod Khosla, a co-founder of Sun Microsystems, had made the Giving Pledge a couple of years before. Would Thiel be willing to talk with Gates about doing the same?
  • Thiel feels that giving his billions away would be too much like admitting he had done something wrong to acquire them
  • He also lacked sympathy for the impulse to spread resources from the privileged to those in need. When I mentioned the terrible poverty and inequality around the world, he said, “I think there are enough people working on that.”
  • besides, a different cause moves him far more.
  • Should Thiel happen to die one day, best efforts notwithstanding, his arrangements with Alcor provide that a cryonics team will be standing by.
  • Then his body will be cooled to –196 degrees Celsius, the temperature of liquid nitrogen. After slipping into a double-walled, vacuum-insulated metal coffin, alongside (so far) 222 other corpsicles, “the patient is now protected from deterioration for theoretically thousands of years,” Alcor literature explains.
  • All that will be left for Thiel to do, entombed in this vault, is await the emergence of some future society that has the wherewithal and inclination to revive him. And then make his way in a world in which his skills and education and fabulous wealth may be worth nothing at all.
  • I wondered how much Thiel had thought through the implications for society of extreme longevity. The population would grow exponentially. Resources would not. Where would everyone live? What would they do for work? What would they eat and drink? Or—let’s face it—would a thousand-year life span be limited to men and women of extreme wealth?
  • “Well, I maybe self-serve,” he said, perhaps understating the point, “but I worry more about stagnation than about inequality.”
  • Thiel is not alone among his Silicon Valley peers in his obsession with immortality. Oracle’s Larry Ellison has described mortality as “incomprehensible.” Google’s Sergey Brin aspires to “cure death.” Dmitry Itskov, a leading tech entrepreneur in Russia, has said he hopes to live to 10,000.
  • “I should be investing way more money into this stuff,” he told me. “I should be spending way more time on this.”
  • You haven’t told your husband? Wouldn’t you want him to sign up alongside you? “I mean, I will think about that,” he said, sounding rattled. “I will think—I have not thought about that.”
  • No matter how fervent his desire, Thiel’s extraordinary resources still can’t buy him the kind of “super-duper medical treatments” that would let him slip the grasp of death. It is, perhaps, his ultimate disappointment.
  • “There are all these things I can’t do with my money,” Thiel said.
Javier E

How OnlyFans top earner Bryce Adams makes millions selling a sex fantasy - Washington Post - 0 views

  • In the American creator economy, no platform is quite as direct or effective as OnlyFans. Since launching in 2016, the subscription site known primarily for its explicit videos has become one of the most methodical, cash-rich and least known layers of the online-influencer industry, touching every social platform and, for some creators, unlocking a once-unimaginable level of wealth.
  • More than 3 million creators now post around the world on OnlyFans, which has 230 million subscribing “fans” — a global audience two-thirds the size of the United States itself
  • fans’ total payouts to creators soared last year to $5.5 billion — more than every online influencer in the United States earned from advertisers that year,
  • If OnlyFans’s creator earnings were taken as a whole, the company would rank around No. 90 on Forbes’s list of the biggest private companies in America by revenue, ahead of Twitter (now called X), Neiman Marcus Group, New Balance, Hard Rock International and Hallmark Cards.
  • Many creators now operate like independent media companies, with support staffs, growth strategies and promotional budgets, and work to apply the cold quantification and data analytics of online marketing to the creation of a fantasy life.
  • The subscription site has often been laughed off as a tabloid punchline, a bawdy corner of the internet where young, underpaid women (teachers, nurses, cops) sell nude photos, get found out and lose their jobs.
  • pressures to perform for a global audience; an internet that never forgets. “There is simply no room for naivety,” one said in a guide posted to Reddit’s r/CreatorsAdvice.
  • America’s social media giants for years have held up online virality as the ultimate goal, doling out measurements of followers, reactions and hearts with an unspoken promise: that internet love can translate into sponsorships and endorsement deals
  • But OnlyFans represents the creator economy at its most blatantly transactional — a place where viewers pay upfront for creators’ labor, and intimacy is just another unit of content to monetize.
  • The fast ascent of OnlyFans further spotlights how the internet has helped foster a new style of modern gig work that creators see as safe, remote and self-directed,
  • Creators’ nonchalance about the digital sex trade has fueled a broader debate about whether the site’s promotion of feminist autonomy is a facade: just a new class of techno-capitalism, selling the same patriarchal dream.
  • But OnlyFans increasingly has become the model for how a new generation of online creators gets paid. Influencers popular on mainstream sites use it to capitalize on the audiences they’ve spent years building. And OnlyFans creators have turned going viral on the big social networks into a marketing strategy, using Facebook, Twitter and TikTok as sales funnels for getting new viewers to subscribe.
  • many creators, she added, still find it uniquely alluring — a rational choice in an often-irrational environment for gender, work and power. “Why would I spend my day doing dirty, degrading, minimum-wage labor when I can do something that brings more money in and that I have a lot more control over?”
  • it is targeting major “growth regions” in Latin America, Europe and Australia. (The Mexican diver Diego Balleza said he is using his $15-a-month account to save up for next year’s Paris Olympics.)
  • “Does an accountant always enjoy their work? No. All work has pleasure and pain, and a lot of it is boring and annoying. Does that mean they’re being exploited?”
  • Adams’s operation is registered in state business records as a limited liability company and offers quarterly employee performance reviews and catered lunch. It also runs with factory-like efficiency, thanks largely to a system designed in-house to track millions of data points on customers and content and ensure every video is rigorously planned and optimized.
  • Since sending her first photo in 2021, Adams’s OnlyFans accounts have earned $16.5 million in sales, more than 1.4 million fans and more than 11 million “likes.” She now makes about $30,000 a day — more than most American small businesses — from subscriptions, video sales, messages and tips, half of which is pure profit
  • Adams’s team sees its business as one of harmless, destigmatized gratification, in which both sides get what they want. The buyers are swiped over in dating apps, widowed, divorced or bored, eager to pay for the illusion of intimacy with an otherwise unattainable match. And the sellers see themselves as not all that different from the influencers they watched growing up on YouTube, charging for parts of their lives they’d otherwise share for free.
  • “This is normal for my generation, you know?
  • “I can go on TikTok right now and see ten girls wearing the bare minimum of clothing just to get people to join their page. Why not go the extra step to make money off it?”
  • the job can be financially precarious and mentally taxing, demanding not just the technical labor of recording, editing, managing and marketing but also the physical and emotional labor of adopting a persona to keep clients feeling special and eager to spend.
  • OnlyFans’s parent company, Fenix International Limited, reported that its sales grew from $238 million in 2019 to more than $5.5 billion last year.
  • Its international army of creators has also grown from 348,000 in 2019 to more than 3 million today — a tenfold increase.
  • The company paid its owner, the Ukrainian American venture capitalist Leonid Radvinsky, $338 million in dividends last year.
  • portion of its creator base and 70 percent of its annual revenue
  • When Tim Stokely, a London-based operator of live-cam sex sites, founded OnlyFans with his brother in 2016, he framed it as a simple way to monetize the creators who were becoming the world’s new celebrities — the same online influencers, just with a payment button. In 2019, Stokely told Wired magazine that his site was like “a bolt-on to your existing social media,” in the same way “Uber is a bolt-on to your car.”
  • Before OnlyFans, pornography on the internet had been largely a top-down enterprise, with agents, producers, studios and other middlemen hoarding the profits of performers’ work. OnlyFans democratized that business model, letting the workers run the show: recording their own content, deciding their prices, selling it however they’d like and reaping the full reward.
  • The platform bans real-world prostitution, as well as extreme or illegal content, and requires everyone who shows up on camera to verify they’re 18 or older by sending in a video selfie showing them holding a government-issued ID.
  • OnlyFans operates as a neutral marketplace, with no ads, trending topics or recommendation algorithms, placing few limitations on what creators can sell but also making it necessary for them to market themselves or fade away.
  • After sending other creators’ agents their money over PayPal, Adams’s ad workers send suggestions over the messaging app Telegram on how Bryce should be marketed, depending on the clientele. OnlyFans models whose fans tend to prefer the “girlfriend experience,” for instance, are told to talk up her authenticity: “Bryce is a real, fit girl who wants to get to know you
  • Like most platforms, OnlyFans suffers from a problem of incredible pay inequality, with the bulk of the profits concentrated in the bank accounts of the lucky few.
  • the top 1 percent of accounts made 33 percent of the money, and that most accounts took home less than $145 a month
  • Watching their partner have sex with someone else sometimes sparked what they called “classic little jealousy issues,” which Adams said they resolved with “more communication, more growing up.” The money was just too good. And over time, they adopted a self-affirming ideology that framed everything as just business. Things that were tough to do but got easier with practice, like shooting a sex scene, they called, in gym terms, “reps.” Things one may not want to do at first, but require some mental work to approach, became “self-limiting beliefs.”
  • They started hiring workers through friends and family, and what was once just Adams became a team effort, in which everyone was expected to workshop caption and video ideas. The group evaluated content under what Brian, who is 31, called a “triangulation method” that factored their comfort level with a piece of content alongside its engagement potential and “brand match.” Bryce the person gave way to Bryce the brand, a commercialized persona drafted by committee and refined for maximum marketability.
  • One of the operation’s most subtly critical components is a piece of software known as “the Tool,” which they developed and maintain in-house. The Tool scrapes and compiles every “like” and view on all of Adams’s social network accounts, every OnlyFans “fan action” and transaction, and every text, sext and chat message — more than 20 million lines of text so far.
  • It houses reams of customer data and a library of preset messages that Adams and her chatters can send to fans, helping to automate their reactions and flirtations — “an 80 percent template for a personalized response,” she said.
  • And it’s linked to a searchable database, in which hundreds of sex scenes are described in detail — by price, total sales, participants and general theme — and given a unique “stock keeping unit,” or SKU, much like the scannable codes on a grocery store shelf. If a fan says they like a certain sexual scenario, a team member can instantly surface any relevant scenes for an easy upsell. “Classic inventory chain,” Adams said.
  • The systemized database is especially handy for the young women of Adams’s chat team, known as the “girlfriends,” who work at a bench of laptops in the gym’s upper loft. The Tool helped “supercharge her messaging, which ended up, like, 3X-ing her output,” Brian said, meaning it tripled.
  • Keeping men talking is especially important because the chat window is where Adams’s team sends out their mass-message sales promotions, and the girlfriends never really know what to expect. One girlfriend said she’s had as many as four different sexting sessions going at once.
  • Adams employs a small team that helps her pay other OnlyFans creators to give away codes fans can use for free short-term trials. The team tracks redemption rates and promotional effectiveness in a voluminous spreadsheet, looking for guys who double up on discount codes, known as “stackers,” as well as bad bets and outright fraud.
  • Many OnlyFans creators don’t offer anything explicit, and the site has pushed to spotlight its stable of chefs, comedians and mountain bikers on a streaming channel, OFTV. But erotic content on the platform is inescapable; even some outwardly conventional creators shed their clothes behind the paywall
  • Creators with a more hardcore fan base, meanwhile, are told to cut to the chase: “300+ sex tapes & counting”; “Bryce doesn’t say no, she’s the most wild, authentic girl you will ever find.”
  • The $18 an hour she makes on the ad team, however, is increasingly dwarfed by the money Leigh makes from her personal OnlyFans account, where she sells sex scenes with her boyfriend for $10 a month. Leigh made $92,000 in gross sales in July, thanks largely to revenue from new fans who found her through Adams or the bikini videos Leigh posts to her 170,000-follower TikTok account
  • “This is a real job. You dedicate your time to it every single day. You’re always learning, you’re always doing new things,” she said. “I’d never thought I’d be good at business, but learning all these business tactics really empowers you. I have my own LLC; I don’t know any other 20-year-old right now that has their own LLC.”
  • The team is meeting all traffic goals, per their internal dashboard, which showed that through the day on a recent Thursday they’d gained 2,221,835 video plays, 19,707 landing-page clicks, 6,372 new OnlyFans subscribers and 9,024 new social-network followers. And to keep in shape, Adams and her boyfriend are abiding by a rigorous daily diet and workout plan
  • They eat the same Chick-fil-A salad at every lunch, track every calorie and pay a gym assistant to record data on every rep and weight of their exercise.
  • But the OnlyFans business is competitive, and it does not always feel to the couple like they’ve done enough. Their new personal challenge, they said, is to go viral on the other platforms as often as possible, largely through jokey TikTok clips and bikini videos that don’t give away too much.
  • the host told creators this sales-funnel technique was key to helping build the “cult of you”: “Someone’s fascination will become infatuation, which will make you a lot of money.”
  • Adams’s company has worked to reverse engineer the often-inscrutable art of virality, and Brian now estimates Adams makes about $5,000 in revenue for every million short-form video views she gets on TikTok.
  • Her team has begun ranking each platform by the amount of money they expect they can get from each viewer there, a metric they call “fan lifetime value.” (Subscribers who click through to her from Facebook tend to spend the most, the data show. Facebook declined to comment.) (A toy illustration of this metric appears after this list.)
  • The younger workers said they see the couple as mentors, and the two are constantly reminding them that the job of a creator is not a “lottery ticket” and requires a persistent grind. Whenever one complains about their lack of engagement, Brian said he responds, “When’s the last time you posted 60 different videos, 60 days in a row, on your Instagram Reels?”
  • But some have taken to it quite naturally. Rayna Rose, 19, was working last year at a hair salon, sweeping floors for $12 an hour, when an old high school classmate who worked with Adams asked whether she wanted to try OnlyFans and make $500 a video.
  • Rose started making videos and working as a chatter for $18 an hour but recently renegotiated her contract with Adams to focus more on her personal OnlyFans account, where she has nearly 30,000 fans, many of whom pay $10 a month.
  • One recent evening this summer, Adams was in the farm’s gym when her boyfriend told her he was headed to their guest room to record a collab with Rose, who was wearing a blue bikini top and braided pigtails.
  • “Go have fun,” Adams told them as they walked away. “Make good content.” The 15-minute video has so far sold more than 1,400 copies and accounted for more than $30,000 in sales.
  • Rose said she has lost friends due to her “lifestyle,” with one messaging her recently, “Can you imagine how successful you would be if you studied regularly and spent your time wisely?”
  • The message stung but, in Rose’s eyes, they didn’t understand her at all. She feels, for the first time, like she has a sense of purpose: She wants to be a full-time influencer. She expects to clear $200,000 in earnings this year and is now planning to move out of her parents’ house.
  • “I had no idea what I wanted to do with my life. And now I know,” she said. “I want to be big. I want to be, like, mainstream.”
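A toy illustration of the “fan lifetime value” ranking mentioned above (a hedged sketch, not the team’s actual tooling): it divides the revenue attributed to each traffic source by the number of viewers that source sent, then ranks platforms by the result. The platform names are real, but every figure except the roughly $5,000-per-million-TikTok-views estimate quoted in the article is a hypothetical placeholder.

    # Toy "fan lifetime value" ranking: revenue attributed to each traffic source
    # divided by the viewers that source sent. All numbers are hypothetical except
    # the ~$5,000 per million TikTok views cited in the article.
    referral_data = {
        # platform: (viewers sent, revenue attributed to those viewers, USD)
        "facebook":  (12_000, 95_000),
        "tiktok":    (1_000_000, 5_000),
        "instagram": (80_000, 22_000),
    }

    def fan_lifetime_value(viewers, revenue):
        """Expected revenue per viewer arriving from a given platform."""
        return revenue / viewers if viewers else 0.0

    ranking = sorted(
        ((name, fan_lifetime_value(v, r)) for name, (v, r) in referral_data.items()),
        key=lambda item: item[1],
        reverse=True,
    )

    for name, value in ranking:
        print(f"{name:10s} ${value:.4f} per viewer")

Under these placeholder numbers, Facebook ranks first, consistent with the article’s note that Facebook referrals tend to spend the most.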
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (A toy sketch of next-word prediction appears at the end of this list.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • One of its principal challenges will be making sure that the objectives we give to AIs stick.
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world.”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes.
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • We don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly.
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI.
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary.
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance.
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • If the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • But he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast.
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be.
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his.
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.