History Readings / Group items tagged computers

Javier E

Carlos Moreno Wanted to Improve Cities. Conspiracy Theorists Are Coming for Him. - The ...

  • For most of his 40-year career, Carlos Moreno, a scientist and business professor in Paris, worked in relative peace. Many cities around the world embraced a concept he started to develop in 2010. Called the 15-minute city, the idea is that everyday destinations such as schools, stores and offices should be only a short walk or bike ride away from home. A group of nearly 100 mayors worldwide embraced it as a way to help recover from the pandemic.
  • In recent weeks, a deluge of rumors and distortions has taken aim at Mr. Moreno’s proposal. Driven in part by climate change deniers and backers of the QAnon conspiracy theory, false claims have circulated online, at protests and even in government hearings that 15-minute cities were a precursor to “climate change lockdowns” — urban “prison camps” in which residents’ movements would be surveilled and heavily restricted.
  • Many attacked Mr. Moreno, 63, directly. The professor, who teaches at the University of Paris 1 Panthéon-Sorbonne, faced harassment in online forums and over email. He was accused without evidence of being an agent of an invisible totalitarian world government. He was likened to criminals and dictators.
  • he started receiving death threats. People said they wished he and his family had been killed by drug lords, told him that “sooner or later your punishment will arrive” and proposed that he be nailed into a coffin or run over by a cement roller.
  • Mr. Moreno, who grew up in Colombia, began working as a researcher in a computer science and robotics lab in Paris in 1983; the career that followed involved creating a start-up, meeting the Dalai Lama and being named a knight of the Légion d’Honneur. His work has won several awards and spanned many fields — automotive, medical, nuclear, military, even home goods.
  • Many of the recent threats have been directed at scientists studying Covid-19. In a survey of 321 such scientists who had given media interviews, the journal Nature found that 22 percent had received threats of physical or sexual violence and 15 percent had received death threats.
  • Last year, an Austrian doctor who was a vocal supporter of vaccines and a repeated target of threats died by suicide.
  • increasingly, even professors and researchers without much of a public persona have faced intimidation from extremists and conspiracy theorists.
  • Around 2010, he started thinking about how technology could help create sustainable cities. Eventually, he refined his ideas about “human smart cities” and “living cities” into his 2016 proposal for 15-minute cities.
  • The idea owes much to its many predecessors: “neighborhood units” and “garden cities” in the early 1900s, the community-focused urban planning pioneered by the activist Jane Jacobs in the 1960s, even support for “new urbanism” and walkable cities in the 1990s. So-called low-traffic neighborhoods, or LTNs, have been set up in several British cities over the past few decades.
  • Critics of 15-minute cities have been outspoken, arguing that a concept developed in Europe may not translate well to highly segregated American cities. A Harvard economist wrote in a blog post for the London School of Economics and Political Science in 2021 that the concept was a “dead end” that would exacerbate “enormous inequalities in cities” by subdividing without connecting them.
  • Jordan Peterson, a Canadian psychologist with four million Twitter followers, suggested that 15-minute cities were “perhaps the worst imaginable perversion” of the idea of walkable neighborhoods. He linked to a post about the “Great Reset,” an economic recovery plan proposed by the World Economic Forum that has spawned hordes of rumors about a pandemic-fueled plot to destroy capitalism.
  • A member of Britain’s Parliament said that 15-minute cities were “an international socialist concept” that would “cost us our personal freedoms.” QAnon supporters said the derailment of a train carrying hazardous chemicals in Ohio was an intentional move meant to push rural residents into 15-minute cities.
  • “Conspiracy-mongers have built a complete story: climate denialism, Covid-19, anti-vax, 5G controlling the brains of citizens, and the 15-minute city for introducing a perimeter for day-to-day life,” Mr. Moreno said. “This storytelling is totally insane, totally irrational for us, but it makes sense for them.”
  • The multipronged conspiracy theory quickly became “turbocharged” after the Oxford protest, said Jennie King, head of climate research and policy at the Institute for Strategic Dialogue, a think tank that studies online platforms.
  • “You have this snowball effect of a policy, which in principle was only going to affect a small urban population, getting extrapolated and becoming this crucible where far-right groups, industry-sponsored lobbying groups, conspiracist movements, anti-lockdown groups and more saw an opportunity to insert their worldview into the mainstream and to piggyback on the news cycle,”
  • The vitriol currently directed at Mr. Moreno and researchers like him mirrors “the broader erosion of trust in experts and institutions,”
  • Modern conspiracy theorists and extremists turn the people they disagree with into scapegoats for a vast array of societal ills, blaming them personally for causing the high cost of living or various health crises and creating an “us-versus-them” environment, she said.
  • “I am not a politician, I am not a candidate for anything — as a researcher, my duty is to explore and deepen my ideas with scientific methodology,” he said. “It is totally unbelievable that we could receive a death threat just for working as scientists.”

Elon Musk, Other AI Experts Call for Pause in Technology's Development - WSJ

  • Calls for a pause clash with a broad desire among tech companies and startups to double down on so-called generative AI, a technology capable of generating original content in response to human prompts. Buzz around generative AI exploded last fall after OpenAI unveiled a chatbot that can provide lengthy answers and produce computer code with humanlike sophistication.
  • Microsoft has embraced the technology for its Bing search engine and other tools. Alphabet Inc.’s Google has deployed a rival system, and companies such as Adobe Inc., Zoom Video Communications Inc. and Salesforce Inc. have also introduced advanced AI tools.
  • “A race starts today,” Microsoft CEO Satya Nadella said last month. “We’re going to move, and move fast.”
  • “It is unfortunate to frame this as an arms race,” Mr. Tegmark said. “It is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.” 
  • Messrs. Musk and Wozniak have both voiced concerns about AI technology. Mr. Musk on Wednesday tweeted that developers of the advanced AI technology “will not heed this warning, but at least it was said.”
  • Yann LeCun, chief AI scientist at Meta Platforms Inc., on Tuesday tweeted that he didn’t sign the letter because he disagreed with its premise. 
  • Mr. Mostaque, Stability AI’s CEO, said in a tweet Wednesday that although he signed the letter, he didn’t agree with a six-month pause. “It has no force but will kick off an important discussion that will hopefully bring more transparency & governance to an opaque area.”
  • Mr. Tegmark said many companies feel “crazy commercial pressures” to add advanced AI technology into their products. A six-month pause would allow the industry “breathing room,” without disadvantaging ones that opt to move carefully
  • The letter said a pause should be declared publicly and be verifiable and all key actors in the space should participate. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it said
  • AI labs and experts can use this time to develop a set of shared safety rules for advanced AI design that should be audited and overseen by outside experts, the authors wrote.
  • “I don’t think we can afford to just go forward and break things,” said Mr. Bengio, who shared a 2018 Turing award for inventing the systems that modern AI is built on. “We do need to take time to think through this collectively.”

World must wake up to speed and scale of AI

  • Unlike Einstein, who was urging the US to get ahead, these distinguished authors want everyone to slow down, and in a completely rational world that is what we would do
  • But, very much like the 1940s, that is not going to happen. Is the US, having gone to great trouble to deny China the most advanced semiconductors necessary for cutting-edge AI, going to voluntarily slow itself down? Is China going to pause in its own urgent effort to compete? Putin observed six years ago that “whoever becomes leader in this sphere will rule the world”. We are now in a race that cannot be stopped.
  • Now we have to get used to capabilities that grow much, much faster, advancing radically in a matter of weeks. That is the real reason 1,100 experts have hit the panic button. Since the advent of deep learning about ten years ago, the scale of “training compute” — think of this as the power of AI — has doubled every six months.
  • If that continues, it will take five years, the length of a British parliament, for AI to become a thousand times more powerful
  • no one has yet determined how to solve the problem of “alignment” between AI and human values, or which human values those would be. Without that, says the leading US researcher Eliezer Yudkowsky, “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else”.
  • The rise of AI is almost certainly one of the two main events of our lifetimes, alongside the acceleration of climate change
  • open up a new age in which the most successful humans will merge their thinking intimately with that of machines
  • The stately world of making law and policy is about to be overtaken at great speed, as are many other aspects of life, work and what it means to be human when we are no longer the cleverest entity around.
  • what should we do about it in the UK? First, we have to ensure we, with allied nations, are among the leaders in this field. That will be a huge economic opportunity, but it is also a political and security imperative
  • Last week, ministers published five principles to inform responsible development of AI, and a light-touch regulatory regime to avoid the more prescriptive approach being adopted in the EU.
  • we will need much greater sovereign AI capabilities than currently envisaged. This should be done whatever the cost. Within a few years it will seem ridiculous that we are spending £100 billion on a railway line while being short of a few billion to be a world leader in supercomputing.
  • Before AI turns into AGI (artificial general intelligence) the UK has a second responsibility: to take the lead on seeking global agreements on the safe and responsible development of AI
  • even China should agree never to let AI come near the control of nuclear weapons or the creation of dangerous pathogens. The letter from the experts will not stop the AI race, but it should lead to more work on future safety and in parti
  • Last week, ministers said we should not fear AI. In reality, there is a lot to fear. But like an astronaut on a launch-pad, we should feel fear and excitement at the same time. This rocket is lifting off, it will accelerate, and we all need to prepare now.
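The compute-doubling claim quoted above implies the "thousand times more powerful in five years" figure directly: five years at one doubling every six months is ten doublings, or a 1,024-fold increase. A quick sanity check (taking the article's doubling rate as given):

```python
# Training compute is said to double every six months (per the article).
doubling_period_months = 6
years = 5

doublings = years * 12 / doubling_period_months  # 10 doublings in five years
growth_factor = 2 ** doublings

print(doublings)      # 10.0
print(growth_factor)  # 1024.0 -- roughly "a thousand times more powerful"
```

Whether the trend actually continues for five more years is, of course, the open question the article raises.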

The Doctor Who Helped Take Down FTX in His Spare Time - The Atlantic

  • Block, a vehement crypto skeptic, has spent the past 18 months doing forensic blockchain research. He uses open-source tools to follow flows of money between crypto companies, repeatedly demonstrating how shadow banks and nefarious scammers inflate the value of worthless assets in order to generate enormous wealth that exists only on paper.
  • And they produce nothing of value. There’s a reason these massive companies aren’t all using blockchain for their processes: It is incredibly inefficient
  • Block: There’s always stuff going on the blockchain, but these companies also have agreements off of the blockchain, right? Everything they have inside these exchanges is not on the blockchain. It’s using regular old database technology, and it’s not traceable at all. So yeah, a lot of the most important economic activity in crypto has nothing to do with blockchain at all. Huge percentages of people who do this kind of retail crypto trading, they don’t even know how to take what they bought off the exchange and put it in their own wallet.
  • Crypto takes this abstraction a step further, because there’s nothing linked to it at all. There’s no economic activity in this space. There’s nothing produced by these companies. In fact, it’s a negative-sum game because of the cost of running the blockchains alone—the computational cost is tremendous.
  • Crypto hides behind all this complexity, and people hear words like blockchain and get confused. You hear about decentralized networks and mining, and it sounds complicated. But you get right down to it, and it’s just a ledger. It’s just like somebody writing down numbers in a book, and it’s page after page of numbers. That’s all it is.
  • And realistically, who actually wants their financial information public and visible to everybody?
  • The vast majority of people who got involved in this have no interest related to the technology or in the political or ideological aspects of crypto. They just see an opportunity to get rich. And a lot of those people end up absorbing and parroting some of the crypto ideals back to you, but they don’t really care to understand what’s going on. It’s just their excuse for what they’ve already done, which is gamble on something they thought was going to make them wealthy.
  • I think most crypto companies are, like FTX, just borrowing from customer deposits to keep things afloat. And even the companies that aren’t doing that—I think Coinbase, for example, isn’t doing anything illicit, but their business model is based on this ecosystem where new money comes in. And that’s stopping.
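Block's "it's just a ledger" point above can be illustrated with a minimal sketch: a hash-chained append-only log, where each record commits to the previous record's hash. This is an illustrative toy, not any real chain's format, but the chaining is essentially all a blockchain adds to "numbers in a book" — rewriting an old entry invalidates every later link.

```python
import hashlib
import json

def add_entry(ledger, entry):
    # Each record stores the hash of the previous record, so the
    # history can only be extended, not silently rewritten.
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"entry": entry, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return ledger

ledger = []
add_entry(ledger, {"from": "alice", "to": "bob", "amount": 5})
add_entry(ledger, {"from": "bob", "to": "carol", "amount": 2})

# The second record commits to the first one's hash.
assert ledger[1]["prev"] == ledger[0]["hash"]
```

As the interview notes, though, much of the economically significant activity on exchanges never touches such a structure at all — it lives in ordinary, untraceable databases.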

What Does Peter Thiel Want? - Persuasion

  • Of the many wealthy donors working to shape the future of the Republican Party, none has inspired greater fascination, confusion, and anxiety than billionaire venture capitalist Peter Thiel. 
  • Thiel’s current outlook may well make him a danger to American democracy. But assessing the precise nature of that threat requires coming to terms with his ultimate aims—which have little to do with politics at all. 
  • Thiel is, first and foremost, a dynamist—someone who cares above all about fostering innovation, exploration, growth, and discovery.
  • It certainly informed his libertarianism, which inclined in the direction of an Ayn Rand-inspired valorization of entrepreneurial superman-geniuses whose great acts of capitalistic creativity benefit all of mankind. Thiel also tended to follow Rand in viewing the masses as moochers who empower Big Government to crush these superman-geniuses.
  • Thiel became something of an opportunistic populist inclined to view liberal elites and institutions as posing the greatest obstacle to building an economy and culture of dynamistic creativity—and eager to mobilize the anger and resentment of “the people” as a wrecking ball to knock them down. 
  • the failure of the Trump administration to break more decisively from the political status quo left Thiel uninterested in playing a big role in the 2020 election cycle.
  • Does Thiel personally believe that the 2020 election was stolen from Trump? I doubt it. It’s far more likely he supports the disruptive potential of encouraging election-denying candidates to run and helping them to win.
  • Thiel is moved to indignation by the fact that since 1958 no commercial aircraft (besides the long-decommissioned Concorde) has been developed that can fly faster than 977 kilometers per hour.
  • Thiel and others point out that when we lift our gaze from our phones and related consumer products to the wider vistas of human endeavor—breakthroughs in medicine, the development of new energy sources, advances in the speed and ease of transportation, and the exploration of space—progress has indeed slowed to a crawl.
  • the present looks and feels pretty much the same as 1969, only “with faster computers and uglier cars.” 
  • Thiel’s approach to the problem is distinctive in that he sees the shortfall as evidence of a deeper and more profound moral, aesthetic, and even theological failure. Human beings are capable of great creativity and invention, and we once aspired to achieve it in every realm. But now that aspiration has been smothered by layer upon layer of regulation and risk-aversion. “Legal sclerosis,” Thiel claimed in that same book review, “is likely a bigger obstacle to the adoption of flying cars than any engineering problem.”
  • Progress in science and technology isn’t innate to human beings, Thiel believes. It’s an expression of a specific cultural or civilizational impulse that has its roots in Christianity and reached a high point during the Victorian era of Western imperialism
  • As Thiel put it last summer in a wide-ranging interview with the British website UnHerd, the Christian world “felt very expansive, both in terms of the literal empire and also in terms of the progress of knowledge, of science, of technology, and somehow that was naturally consonant with a certain Christian eschatology—a Christian vision of history.”
  • In Thiel’s view, recapturing civilizational greatness through scientific and technological achievement requires fostering a revival of a kind of Christian Prometheanism (a monotheistic variation on the rebellious creativity and innovation pursued by the demigod Prometheus in ancient Greek mythology)
  • Against those who portray modern scientific and technological progress as a rebellion against medieval Christianity, Thiel insists it is Christianity that encourages a metaphysical optimism about transforming and perfecting the world, with the ultimate goal of turning it into “a place where no accidents can happen” and the achievement of “personal immortality” becomes possible
  • All that’s required to reach this transhuman end is that we “remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth—in which the kingdom of heaven is both a future reality and something partially achievable in the present.” 
  • Thiel aims to undermine the progressive liberalism that dominates the mainstream media, the federal bureaucracy, the Justice Department, and the commanding heights of culture (in universities, think tanks, and other nonprofits).
  • JD Vance is quoted on the subject of what this political disruption might look like during a Trump presidential restoration in 2025. Vance suggests that Trump should “fire every single midlevel bureaucrat, every civil servant in the administrative state, replace them with our people. And when the courts stop [him], stand before the country, and say, ‘the chief justice has made his ruling. Now let him enforce it.’”
  • Another Thiel friend and confidante discussed at length in Vanity Fair, neo-reactionary Curtis Yarvin, takes the idea of disrupting the liberal order even further, suggesting various ways a future right-wing president (Trump or someone else) could shake things up, shredding the smothering blanket of liberal moralism, conformity, rules, and regulations, thereby encouraging the creation of something approaching a scientific-technological wild west, where innovation and experimentation rule the day. Yarvin’s preferred path to tearing down what he calls the liberal “Cathedral,” laid out in detail on a two-hour Claremont Institute podcast from May 2021, involves a Trump-like figure seizing dictatorial power in part by using a specially designed phone app to direct throngs of staunch supporters (Jan. 6-style) to overpower law enforcement at key locations around the nation’s capital.  
  • this isn’t just an example of guilt-by-association. These are members of Thiel’s inner circle, speaking publicly about ways of achieving shared goals. Thiel funded Vance’s Senate campaign to the tune of at least $15 million. Is it likely the candidate veered into right-wing radicalism with a Vanity Fair reporter in defiance of his campaign’s most crucial donor?
  • As for Yarvin, Thiel continued to back his tech start up (Urbit) after it became widely known he was the pseudonymous author behind the far-right blog “Unqualified Reservations,” and as others have shown, the political thinking of the two men has long overlapped in numerous other ways. 
  • He’s deploying his considerable resources to empower as many people and groups as he can, first, to win elections by leveraging popular disgust at corrupt institutions—and second, to use the power they acquire to dismantle or even topple those institutions, hopefully allowing a revived culture of Christian scientific-technological dynamism to arise from out of the ruins.  
  • Far more than most big political donors, Thiel appears to care only about the extra-political goal of his spending. How we get to a world of greater dynamism—whether it will merely require selective acts of troublemaking disruption, or whether, instead, it will ultimately involve smashing the political order of the United States to bits—doesn’t really concern him. Democratic politics itself—the effort of people with competing interests and clashing outlooks to share rule for the sake of stability and common flourishing—almost seems like an irritant and an afterthought to Peter Thiel.
  • What we do have is the opportunity to enlighten ourselves about what these would-be Masters of the Universe hope to accomplish—and to organize politically to prevent them from making a complete mess of things in the process.

The new tech worldview | The Economist

  • Sam Altman is almost supine
  • the 37-year-old entrepreneur looks about as laid-back as someone with a galloping mind ever could. Yet the CEO of OpenAI, a startup reportedly valued at nearly $20bn whose mission is to make artificial intelligence a force for good, is not one for light conversation.
  • Joe Lonsdale, 40, is nothing like Mr Altman. He’s sitting in the heart of Silicon Valley, dressed in linen with his hair slicked back. The tech investor and entrepreneur, who has helped create four unicorns plus Palantir, a data-analytics firm worth around $15bn that works with soldiers and spooks
  • a “builder class”—a brains trust of youngish idealists, which includes Patrick Collison, co-founder of Stripe, a payments firm valued at $74bn, and other (mostly white and male) techies, who are posing questions that go far beyond the usual interests of Silicon Valley’s titans. They include the future of man and machine, the constraints on economic growth, and the nature of government.
  • They share other similarities. Business provided them with their clout, but doesn’t seem to satisfy their ambition
  • The number of techno-billionaires in America (Mr Collison included) has more than doubled in a decade.
  • Some of them, like the Medicis in medieval Florence, are keen to use their money to bankroll the intellectual ferment
  • The other is Paul Graham, co-founder of Y Combinator, a startup accelerator, whose essays on everything from cities to politics are considered required reading on tech campuses.
  • Mr Altman puts it more optimistically: “The iPhone and cloud computing enabled a Cambrian explosion of new technology. Some things went right and some went wrong. But one thing that went weirdly right is a lot of people got rich and said ‘OK, now what?’”
  • A belief that with money and brains they can reboot social progress is the essence of this new mindset, making it resolutely upbeat
  • The question is: are the rest of them further evidence of the tech industry’s hubristic decadence? Or do they reflect the start of a welcome capacity for renewal?
  • Two well-known entrepreneurs from that era provided the intellectual seed capital for some of today’s techno nerds.
  • Mr Thiel, a would-be libertarian philosopher and investor
  • This cohort of eggheads starts from common ground: frustration with what they see as sluggish progress in the world around them.
  • Yet the impact could ultimately be positive. Frustrations with a sluggish society have encouraged them to put their money and brains to work on problems from science funding and the redistribution of wealth to entirely new universities. Their exaltation of science may encourage a greater focus on hard tech
  • the rationalist movement has hit the mainstream. The result is a fascination with big ideas that its advocates believe goes beyond simply rose-tinted tech utopianism
  • A burgeoning example of this is “progress studies”, a movement that Mr Collison and Tyler Cowen, an economist and seer of the tech set, advocated for in an article in the Atlantic in 2019
  • Progress, they think, is a combination of economic, technological and cultural advancement—and deserves its own field of study
  • There are other examples of this expansive worldview. In an essay in 2021 Mr Altman set out a vision that he called “Moore’s Law for Everything”, based on similar logic to the semiconductor revolution. In it, he predicted that smart machines, building ever smarter replacements, would in the coming decades outcompete humans for work. This would create phenomenal wealth for some, obliterate wages for others, and require a vast overhaul of taxation and redistribution
  • His two bets, on OpenAI and nuclear fusion, have become fashionable of late—the former’s chatbot, ChatGPT, is all the rage. He has invested $375m in Helion, a company that aims to build a fusion reactor.
  • Mr Lonsdale, who shares a libertarian streak with Mr Thiel, has focused attention on trying to fix the shortcomings of society and government. In an essay this year called “In Defence of Us”, he argues against “historical nihilism”, or an excessive focus on the failures of the West.
  • With a soft spot for Roman philosophy, he has created the Cicero Institute in Austin that aims to inject free-market principles such as competition and transparency into public policy.
  • He is also bringing the startup culture to academia, backing a new place of learning called the University of Austin, which emphasises free speech.
  • All three have business ties to their mentors. As a teen, Mr. Altman was part of the first cohort of founders in Mr. Graham’s Y Combinator, which went on to back successes such as Airbnb and Dropbox. In 2014 he replaced him as its president, and for a while counted Mr. Thiel as a partner (Mr. Altman keeps an original manuscript of Mr. Thiel’s book “Zero to One” in his library). Mr. Thiel was also an early backer of Stripe, founded by Mr. Collison and his brother, John. Mr. Graham saw promise in Patrick Collison while the latter was still at school. He was soon invited to join Y Combinator. Mr. Graham remains a fan: “If you dropped Patrick on a desert island, he would figure out how to reproduce the Industrial Revolution.”
  • While at university, Mr Lonsdale edited the Stanford Review, a contrarian publication co-founded by Mr Thiel. He went on to work for his mentor and the two men eventually helped found Palantir. He still calls Mr Thiel “a genius”—though he claims these days to be less “cynical” than his guru.
  • “The tech industry has always told these grand stories about itself,” says Adrian Daub of Stanford University and author of the book, “What Tech Calls Thinking”. Mr Daub sees it as a way of convincing recruits and investors to bet on their risky projects. “It’s incredibly convenient for their business models.”
  • In the 2000s Mr Thiel supported the emergence of a small community of online bloggers, self-named the “rationalists”, who were focused on removing cognitive biases from thinking (Mr Thiel has since distanced himself). That intellectual heritage dates even further back, to “cypherpunks”, who noodled about cryptography, as well as “extropians”, who believed in improving the human condition through life extensions
  • Silicon Valley has shown an uncanny ability to reinvent itself in the past.

See How Real AI-Generated Images Have Become - The New York Times

  • The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
  • The advancements are already fueling disinformation and being used to stoke political divisions
  • Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.
  • Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals
  • Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?
  • “The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.AI, a company that helps clients fight disinformation.
  • Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
  • Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.
  • In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”
  • Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.
  • Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.
  • The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association
  • Newsrooms will increasingly struggle to authenticate content
  • Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.
  • The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification
  • The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software.
  • The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.
  • “The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic
  • Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that were licensed or from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identified how an image had been made. Adobe said it also planned to compensate contributors.
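The tamper-evidence idea behind these content credentials can be shown with a toy sketch. Real systems such as Truepic's embed public-key signatures under the C2PA content-credentials standard; the snippet below substitutes a simple keyed hash (HMAC), and the key and sample bytes are invented for illustration, but the core property is the same: any change to the sealed bytes breaks verification.

```python
import hmac
import hashlib

def seal(image_bytes: bytes, key: bytes) -> bytes:
    # Compute a keyed signature over the exact image bytes.
    return hmac.new(key, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, signature: bytes, key: bytes) -> bool:
    # Recompute the signature; any change to the bytes produces a
    # different value, so the constant-time comparison fails.
    return hmac.compare_digest(seal(image_bytes, key), signature)

key = b"creator-secret-key"            # hypothetical signing key
original = b"\x89PNG...pixel data..."  # stand-in for real image bytes
sig = seal(original, key)

assert verify(original, sig, key)       # untouched file: credentials check out
tampered = original.replace(b"pixel", b"PIXEL")
assert not verify(tampered, sig, key)   # a single edit breaks the seal
```

In the production scheme the signature travels inside the file's metadata and is checked by "trusted software," which is why tampering makes the credentials stop appearing rather than raising an error.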
Javier E

Yes, People Will Pay $27,500 for an Old 'Rocky' Tape. Here's Why. - The New York Times - 0 views

  • When Mr. Carlson first began to look for sealed VHS cassettes, they were considered so much plastic trash. “Back to the Future,” “The Goonies,” “Blade Runner,” were about $20 each on eBay. He put them on a shelf, little windows into his past, and started an Instagram account called Rare and Sealed.
  • The current cultural tumult, with its boom in fake images, endless arguments over everything and now the debut of imperious A.I. chatbots, increases the appeal of things that can’t be plugged in.
  • One thing people are eagerly seeking with the new technology is old technology. Cormac McCarthy’s typewriter, which he used to write a shelf of important novels, went for a quarter-million dollars. An Apple 1 computer fetched nearly twice that. A first-generation iPhone, still sealed in its box, sold for $21,000 in December and triple that in February.
  • ...6 more annotations...
  • Blend these factors — a desire for escape from our virtual lives; bidding as fast as pushing a button; and the promotion of new collecting fields like outdated technology devices — and you have Heritage Auctions in Dallas.
  • Heritage is a whirlwind of activity, of passion, of hype, constantly trying new ways of enticing people to own something beautiful and useless. Ninety-one million Americans, according to U.S. Census Bureau surveys, are having trouble paying household bills. Everyone else is a potential bidder.
  • Twenty years ago, Heritage had four categories: coins, comics, movie posters and sports. Now it has more than 50, which generated revenue of $1.4 billion last year. Everything, at least in theory, is collectible.
  • “We don’t question the value or legitimacy of a particular subject matter relative to outmoded norms,” Mr. Benesh said. “We’re not here to tell you what’s worthwhile. The marketplace will tell you. The bidders” — Heritage has 1.6 million — “will tell you.”
  • In mid-2020, the privately held company moved to a 160,000-square-foot building by Dallas-Fort Worth International Airport, doubling the size of its former headquarters. Hundreds of specialists, most of them collectors themselves, prepare hundreds of thousands of items for bids here — researching, photographing, writing catalog copy.
  • The problem is, older historical items that were previously unknown are becoming rare. Every barn, basement and attic has been ransacked for treasures. New items related to Washington or Lincoln, for instance, are nearly impossible to find.
Javier E

AI in Politics Is So Much Bigger Than Deepfakes - The Atlantic - 0 views

  • “Deepfakes have been the next big problem coming in the next six months for about four years now,” Joshua Tucker, a co-director of the NYU Center for Social Media and Politics, told me.
  • Academic research suggests that disinformation may constitute a relatively small proportion of the average American’s news intake, that it’s concentrated among a small minority of people, and that, given how polarized the country already is, it probably doesn’t change many minds.
  • If the first-order worry is that people will get duped, the second-order worry is that the fear of deepfakes will lead people to distrust everything.
  • ...12 more annotations...
  • Researchers call this effect “the liar’s dividend,” and politicians have already tried to cast off unfavorable clips as AI-generated: Last month, Donald Trump falsely claimed that an attack ad had used AI to make him look bad.
  • “Deepfake” could become the “fake news” of 2024, an infrequent but genuine phenomenon that gets co-opted as a means of discrediting the truth
  • Steve Bannon’s infamous assertion that the way to discredit the media is to “flood the zone with shit.”
  • AI is less likely to create new dynamics than to amplify existing ones. Presidential campaigns, with their bottomless coffers and sprawling staff, have long had the ability to target specific groups of voters with tailored messaging
  • They might have thousands of data points about who you are, obtained by gathering information from public records, social-media profiles, and commercial brokers
  • “It is now so cheap to engage in this mass personalization,” Laura Edelson, a computer-science professor at Northeastern University who studies misinformation and disinformation, told me. “It’s going to make this content easier to create, cheaper to create, and put more communities within the reach of it.”
  • That sheer ease could overwhelm democracies’ already-vulnerable election infrastructure. Local- and state-election workers have been under attack since 2020, and AI could make things worse.
  • Those officials have also expressed the worry, he said, that generative AI will turbocharge the harassment they face, by making the act of writing and sending hate mail virtually effortless. (The consequences may be particularly severe for women.)
  • past attacks—most notably the Russian hack of John Podesta’s email, in 2016—have wrought utter havoc. But now pretty much anyone—whatever language they speak and whatever their writing ability—can send out hundreds of phishing emails in fluent English prose. “The cybersecurity implications of AI for elections and electoral integrity probably aren’t getting nearly the focus that they should,”
  • Just last week, AI-generated audio surfaced of one Harlem politician criticizing another. New York City has perhaps the most robust local-news ecosystem of any city in America, but elsewhere, in communities without the media scrutiny and fact-checking apparatuses that exist at the national level, audio like this could cause greater chaos.
  • In countries that speak languages with less online text for LLMs to gobble up, AI tools may be less sophisticated. But those same countries are likely the ones where tech platforms will pay the least attention to the spread of deepfakes and other disinformation, Edelson told me. India, Russia, the U.S., the EU—this is where platforms will focus. “Everything else”—Namibia, Uzbekistan, Uruguay—“is going to be an afterthought,”
  • Most of us tend to fret about the potential fake video that deceives half of the nation, not about the flood of FOIA requests already burying election officials. If there is a cost to that way of thinking, the world may pay it this year at the polls.
Javier E

What Was Apple Thinking With Its New iPad Commercial? - The Atlantic - 0 views

  • The notion behind the commercial is fairly obvious. Apple wants to show you that the bulk of human ingenuity and history can be compressed into an iPad, and thereby wants you to believe that the device is a desirable entry point to both the consumption of culture and the creation of it.
  • Most important, it wants you to know that the iPad is powerful and quite thin.
  • But good Lord, Apple, read the room. In its swing for spectacle, the ad lacks so much self-awareness, it’s cringey, even depressing.
  • ...13 more annotations...
  • This is May 2024: Humanity is in the early stages of a standoff with generative AI, which offers methods through which visual art, writing, music, and computer code can be created by a machine in seconds with the simplest of prompts
  • Most of us are still in the sizing-up phase for generative AI, staring warily at a technology that’s been hyped as world-changing and job-disrupting (even, some proponents argue, potentially civilization-ending), and been foisted on the public in a very short period of time. It’s a weird, exhausting, exciting, even tense moment. Enter: THE CRUSHER.
  • There is about a zero percent chance that the company did not understand the optics of releasing this ad at this moment. Apple is among the most sophisticated and moneyed corporations in all the world.
  • this time, it’s hard to like what the company is showing us. People are angry. One commenter on X called the ad “heartbreaking.”
  • Although watching things explode might be fun, it’s less fun when a multitrillion-dollar tech corporation is the one destroying tools, instruments, and other objects of human expression and creativity.
  • Apple is a great technology company, but it is a legendary marketer. Its ads, its slickly produced keynotes, and even its retail stores succeed because they offer a vision of the company’s products as tools that give us, the consumers, power.
  • The third-order annoyance is in the genre. Apple has essentially aped a popular format of “crushing” videos on TikTok, wherein hydraulic presses are employed to obliterate everyday objects for the pleasure of idle scrollers.
  • It’s unclear whether some of the ad might have been created with CGI, but Apple could easily round up tens of thousands of dollars of expensive equipment and destroy it all on a whim. However small, the ad is a symbol of the company’s dominance.
  • The iPad was one of Steve Jobs’s final products, one he believed could become as popular and perhaps as transformative as cars. That vision hasn’t panned out. The iPad hasn’t killed books, televisions, or even the iPhone
  • The iPad is, potentially, a creative tool. It’s also an expensive luxury device whose cheaper iterations, at least, are vessels for letting your kid watch Cocomelon so they don’t melt down in public, reading self-help books on a plane, or opting for more pixels and better resolution whilst consuming content on the toilet.
  • Odds are, people aren’t really furious at Apple on behalf of the trumpeters—they’re mad because the ad says something about the balance of power
  • it is easy to be aghast at the idea that AI will wipe out human creativity with cheap synthetic waste.
  • The fundamental flaw of Apple’s commercial is that it is a display of force that reminds us about this sleight of hand. We are not the powerful entity in this relationship. The creative potential we feel when we pick up one of their shiny devices is actually on loan. At the end of the day, it belongs to Apple, the destroyer.
Javier E

How We Can Control AI - WSJ - 0 views

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • ...22 more annotations...
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act,
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
Javier E

I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi | The Guardian - 0 views

  • Despite all the marketed use cases, the most impressive aspect of it is the immersive video
  • Watching a movie, however, feels like you’ve been transported into the content.
  • that raises serious questions about how we perceive the world and what we consider reality. Big tech companies are desperate to rush this technology out but it’s not clear how much they’ve been worrying about the consequences.
  • ...10 more annotations...
  • it is clear that its widespread adoption is a matter of when, not if. There is no debate that we are moving towards a world where “real life” and digital technology seamlessly blur
  • Over the years there have been multiple reports of people being harassed and even “raped” in the metaverse: an experience that feels scarily real because of how immersive virtual reality is. As the lines between real life and the digital world blur to a point that they are almost indistinguishable, will there be a meaningful difference between online assault and an attack in real life?
  • more broadly, spatial computing is going to alter what we consider reality
  • Researchers from Stanford and the University of Michigan recently undertook a study on the Vision Pro and other “passthrough” headsets (that’s the technical term for the feature which brings VR content into your real-world surroundings so you see what’s around you while using the device) and emerged with some stark warnings about how this tech might rewire our brains and “interfere with social connection”.
  • These headsets essentially give us all our private worlds and rewrite the idea of a shared reality. The cameras through which you see the world can edit your environment – you can walk to the shops wearing it, for example, and it might delete all the homeless people from your view and make the sky brighter.
  • “What we’re about to experience is, using these headsets in public, common ground disappears,”
  • “People will be in the same physical place, experiencing simultaneous, visually different versions of the world. We’re going to lose common ground.”
  • It’s not just the fact that our perception of reality might be altered that’s scary: it’s the fact that a small number of companies will have so much control over how we see the world. Think about how much influence big tech already has when it comes to content we see, and then multiply that a million times over. You think deepfakes are scary? Wait until they seem even more realistic.
  • We’re seeing a global rise of authoritarianism. If we’re not careful this sort of technology is going to massively accelerate it.
  • Being able to suck people into an alternate universe, numb them with entertainment, and dictate how they see reality? That’s an authoritarian’s dream. We’re entering an age where people can be mollified and manipulated like never before
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say - WSJ - 0 views

  • Japan’s largest telecommunications company and the country’s biggest newspaper called for speedy legislation to restrain generative artificial intelligence, saying democracy and social order could collapse if AI is left unchecked.
  • the manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.
  • The Japanese companies’ manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology
  • Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users’ attention without regard to morals or accuracy.
  • Unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars,” the manifesto said.
  • It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.
  • The Biden administration is also stepping up oversight, invoking emergency federal powers last October to compel major AI companies to notify the government when developing systems that pose a serious risk to national security. The U.S., U.K. and Japan have each set up government-led AI safety institutes to help develop AI guidelines.
  • NTT and Yomiuri said their manifesto was motivated by concern over public discourse. The two companies are among Japan’s most influential in policy. The government still owns about one-third of NTT, formerly the state-controlled phone monopoly.
  • Yomiuri Shimbun, which has a morning circulation of about six million copies according to industry figures, is Japan’s most widely read newspaper. Under the late Prime Minister Shinzo Abe and his successors, the newspaper’s conservative editorial line has been influential in pushing the ruling Liberal Democratic Party to expand military spending and deepen the nation’s alliance with the U.S.
  • The Yomiuri’s news pages and editorials frequently highlight concerns about artificial intelligence. An editorial in December, noting the rush of new AI products coming from U.S. tech companies, said “AI models could teach people how to make weapons or spread discriminatory ideas.” It cited risks from sophisticated fake videos purporting to show politicians speaking.
  • NTT is active in AI research, and its units offer generative AI products to business customers. In March, it started offering these customers a large-language model it calls “tsuzumi,” which is akin to OpenAI’s ChatGPT but is designed to use less computing power and work better in Japanese-language contexts.
Javier E

He Turned 55. Then He Started the World's Most Important Company. - WSJ - 0 views

  • You probably use a device with a chip made by TSMC every day, but TSMC does not actually design or market those chips. That would have sounded completely absurd before the existence of TSMC. Back then, companies designed chips that they manufactured themselves. Chang’s radical idea for a great semiconductor company was one that would exclusively manufacture chips that its customers designed. By not designing or selling its own chips, TSMC never competed with its own clients. In exchange, they wouldn’t have to bother running their own fabrication plants, or fabs, the expensive and dizzyingly sophisticated facilities where circuits are carved on silicon wafers.
  • The innovative business model behind his chip foundry would transform the industry and make TSMC indispensable to the global economy. Now it’s the company that Americans rely on the most but know the least about
  • I wanted to know more about his decision to start a new company when he could have stopped working altogether. What I discovered was that his age was one of his assets. Only someone with his experience and expertise could have possibly executed his plan for TSMC. 
  • “I could not have done it sooner,” he says. “I don’t think anybody could have done it sooner. Because I was the first one.” 
  • By the late 1960s, he was managing TI’s integrated-circuit division. Before long, he was running the entire semiconductor group. 
  • He transferred to the Massachusetts Institute of Technology, where he studied mechanical engineering, earned his master’s degree and would have stayed for his Ph.D. if he hadn’t failed the qualifying exam. Instead, he got his first job in semiconductors and moved to Texas Instruments in 1958
  • he came along as the integrated circuit was being invented, and his timing couldn’t have been any better, as Chang belonged to the first generation of semiconductor geeks. He developed a reputation as a tenacious manager who could wring every possible improvement out of production lines, which put his career on the fast track.
  • Chang grew up dreaming of being a writer—a novelist, maybe a journalist—and he planned to major in English literature at Harvard University. But after his freshman year, he decided that what he actually wanted was a good job
  • “They talk about life-work balance,” he says. “That’s a term I didn’t even know when I was their age. Work-life balance. When I was their age, if there was no work, there was no life.” 
  • These days, TSMC is investing $40 billion to build plants in Arizona, but the project has been stymied by delays, setbacks and labor shortages, and Chang told me that some of TSMC’s young employees in the U.S. have attitudes toward work that he struggles to understand. 
  • Chang says he wouldn’t have taken the risk of moving to Taiwan if he weren’t financially secure. In fact, he didn’t take that same risk the first time he could have.
  • “The closer the industry match,” they wrote, “the greater the success rate.” 
  • By then, Chang knew that he wasn’t long for Texas Instruments. But his stock options hadn’t vested, so he turned down the invitation to Taiwan. “I was not financially secure yet,” he says. “I was never after great wealth. I was only after financial security.” For this corporate executive in the middle of the 1980s, financial security equated to $200,000 a year. “After tax, of course,” he says. 
  • Chang’s situation had changed by the time Li called again three years later. He’d exercised a few million dollars of stock options and bought tax-exempt municipal bonds that paid enough for him to be financially secure by his living standards. Once he’d achieved that goal, he was ready to pursue another one. 
  • “There was no certainty at all that Taiwan would give me the chance to build a great semiconductor company, but the possibility existed, and it was the only possibility for me,” Chang says. “That’s why I went to Taiwan.” 
  • Not long ago, a team of economists investigated whether older entrepreneurs are more successful than younger ones. By scrutinizing Census Bureau records and freshly available Internal Revenue Service data, they were able to identify 2.7 million founders in the U.S. who started companies between 2007 and 2014. Then they looked at their ages.
  • The average age of those entrepreneurs at the founding of their companies was 41.9. For the fastest-growing companies, that number was 45. The economists also determined that 50-year-old founders were almost twice as likely to achieve major success as 30-year-old founders, while the founders with the lowest chance of success were the ones in their early 20s
  • “Successful entrepreneurs are middle-aged, not young,” they wrote in their 2020 paper.  
  • Silicon Valley’s venture capitalists throw money at talented young entrepreneurs in the hopes they will start the next trillion-dollar company. They have plentiful energy, insatiable ambition and the vision to peek around corners and see the future. What they don’t typically have are mortgages, family obligations and other adult responsibilities to distract them or diminish their appetite for risk. Chang himself says that younger people are more innovative when it comes to science and technical subjects. 
  • But in business, older is better. Entrepreneurs in their 40s and 50s may not have the exuberance to believe they will change the world, but they have the experience to know how they actually can. Some need years of specialized training before they can start a company. In biotechnology, for example, founders are more likely to be college professors than college dropouts. Others require the lessons and connections they accumulate over the course of their careers. 
  • one more finding from their study of U.S. companies that helps explain the success of a chip maker in Taiwan. It was that prior employment in the area of their startups—both the general sector and specific industry—predicted “a vastly higher probability” of success.
  • Chang was such a workaholic that he made sales calls on his honeymoon and had no patience for those who didn’t share his drive
  • Morris Chang had 30 years of experience in his industry when he decided to uproot his life and move to another continent. He knew more about semiconductors than just about anyone on earth—and certainly more than anyone in Taiwan. As soon as he started his job at the Industrial Technology Research Institute, Chang was summoned to K.T. Li’s office and given a second job. “He felt I should start a semiconductor company in Taiwan,”
  • “I decided right away that this could not be the kind of great company that I wanted to build at either Texas Instruments or General Instrument,”
  • TI handled every part of chip production, but what worked in Texas would not translate to Taiwan. The only way that he could build a great company in his new home was to make a new sort of company altogether, one with a business model that would exploit the country’s strengths and mitigate its many weaknesses.
  • Chang determined that Taiwan had precisely one strength in the chip supply chain. The research firm that he was now running had been experimenting with semiconductors for the previous 10 years. When he studied that decade of data, Chang was pleasantly surprised by Taiwan’s yields, the percentage of working chips on silicon wafers. They were almost twice as high in Taiwan as they were in the U.S., he said. 
  • “People were ingrained in thinking the secret sauce of a successful semiconductor company was in the wafer fab,” Campbell told me. “The transition to the fabless semiconductor model was actually pretty obvious when you thought about it. But it was so against the prevailing wisdom that many people didn’t think about it.” 
  • Taiwan’s government took a 48% stake, with the rest of the funding coming from the Dutch electronics giant Philips and Taiwan’s private sector, but Chang was the driving force behind the company. The insight to build TSMC around such an unconventional business model was born from his experience, contacts and expertise. He understood his industry deeply enough to disrupt it. 
  • “TSMC was a business-model innovation,” Chang says. “For innovations of that kind, I think people of a more advanced age are perhaps even more capable than people of a younger age.”
  • the personal philosophy that he’d developed over the course of his long career. “To be a partner to our customers,” he says. That founding principle from 1987 is the bedrock of the foundry business to this day, as TSMC says the key to its success has always been enabling the success of its customers.  
  • TSMC manufactures chips in iPhones, iPads and Mac computers for Apple, which accounts for a quarter of TSMC’s net revenue. Nvidia is often called a chip maker, which is curious, because it doesn’t make chips. TSMC does. 
  • Churning out identical copies of a single chip for an iPhone requires one TSMC fab to produce more than a quintillion transistors—that is, one million trillions—every few months. In a year, the entire semiconductor industry produces “more transistors than the combined quantity of all goods produced by all other companies, in all other industries, in all human history,” Miller writes. 
  • I asked how he thought about success when he moved to Taiwan. “The highest degree of success in 1985, according to me, was to build a great company. A lower degree of success was at least to do something that I liked to do and I wanted to do,” he says. “I happened to achieve the highest degree of success that I had in mind.” 
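A quick back-of-envelope check on the “quintillion transistors” figure quoted above (my own arithmetic, not from the article; the per-chip transistor count is an assumption, since the article doesn’t give one):

```python
# Sanity check: how many chips does "more than a quintillion transistors
# every few months" imply for a single fab?
# ASSUMPTION (not from the article): a modern smartphone system-on-chip
# carries roughly 15 billion transistors.
TRANSISTORS_PER_CHIP = 15e9
TRANSISTORS_PER_QUARTER = 1e18  # a quintillion = a million trillions

chips = TRANSISTORS_PER_QUARTER / TRANSISTORS_PER_CHIP
print(f"~{chips:,.0f} chips every few months")
```

Under that assumption, a quintillion transistors works out to tens of millions of chips per quarter from one fab, which is consistent in scale with iPhone production volumes.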
Javier E

Bernanke review is not about blame but the Bank's outdated practices - 0 views

  • Bernanke’s 80-page assessment, the result of more than seven months’ work, is the most comprehensive independent analysis of a big central bank’s performance since an inflationary crisis hit the world economy in early 2022. He offers a dozen recommendations for change at the Bank, the strongest of which is for the MPC to begin publishing “alternative scenarios” that show how its inflation forecasts stand up in extreme situations, for example in the face of an energy price shock.
  • The review lays bare how the Bank and its international peers all failed to model the impact of the huge energy price shock that followed Russia’s invasion of Ukraine in early 2022, the disruption in global trade during the pandemic after 2020 and how workers and companies would respond to significant price changes.
  • In choosing Bernanke, one of the most respected central bankers of his generation, to lead the review, the Bank has ensured that his findings will be difficult to ignore. The former Fed chairman carried out more than 60 face-to-face interviews with Bank staff and market participants and sat in on the MPC’s November 2023 forecasting round to assess where the Bank’s forecasts and communication were falling short, from the use of computer models to the role played by “human judgment”.
  • In his review, Bernanke compared the MPC’s forecasting record with six other central banks — in the Nordic countries, New Zealand, the United States and the eurozone — and found the Bank was particularly bad at understanding dynamics in the jobs market and had consistently forecast far higher unemployment, which had not materialised. Its other errors, on forecasting future inflation and growth, put it largely in the “middle of the pack” with its peers.